They're not afraid of the idea of programming people.
When I worked there every week there would be a different flyer on the inside of the bathroom stall door to try to get the word out about things that really mattered to the company.
One week the flyer was about how a feed video needed to hook the user in the first 0.2 seconds. The flyer promised that if this was done, the result would in essence have a scientifically measurable addictive effect, a brain-hack. The point of the flyer was to make sure this message reached as many advertisers as possible.
It seemed to me quite clear at that moment that the users were prey. The company didn't even care what was being sold to their users with this brain-reprogramming-style tactic. Our goal was to sell the advertisers on the fact that we were scientifically sure we had the tools to reprogram our users' brains.
Another way of describing this - they find people lose interest almost immediately, and so if you want to actually show a consumer something new, you have to get to the point with your ad.
I'm not sure that's a fair characterization of a policy that promotes ads that hook the user within the first 200ms.
200ms isn't enough time for significant information to be transmitted to a person and for them to process it. You don't 'get to the point' in 200ms.
That means that the way to the user's brain and attention is with some irritating little jingle, a picture of a bunny beating a drum, cartoon bears wiping their asses with toilet paper, a picture of a caveman salesman or a picture of an absolutely artificial thing that looks like food but isn't. Stuff that stands out as unnatural.
But that isn't enough. You gotta pair it with spaced repetition. Let them think about this every time they take a shit in the office. Hammer them with the same shrill sounds and garish images on every commercial break. Or after every couple of songs they're trying to listen to on YouTube. Or in interstitials that are algorithmically optimized to pop up in their feed as they mindlessly scroll looking for gossip about their neighbours to scratch that social-group-animal itch in all of us.
Exactly, 200ms is rather different from 'get to the point.' Here is a 'reaction speed test' site: https://reactiontimetest.net/ for anybody who doesn't intuit what 200ms is like (or see the rough local version sketched below).
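Something like the following gives the same feel locally; it's purely an illustrative sketch (it just times a keypress after a random delay, and pressing Enter early will register as a falsely fast time):

    import random
    import time

    def reaction_test():
        # Random 2-5 second delay so the "GO!" can't be anticipated.
        print("Press Enter as soon as you see GO!")
        time.sleep(random.uniform(2, 5))
        start = time.perf_counter()
        print("GO!")
        input()  # blocks until Enter is pressed
        elapsed_ms = (time.perf_counter() - start) * 1000
        # Typical results land around 250-300 ms; under 200 ms is rare.
        print(f"Your reaction time: {elapsed_ms:.0f} ms")

    if __name__ == "__main__":
        reaction_test()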
You will likely be unable to click the screen in response to a box turning green faster than 200ms. To hook somebody on something within 200ms is largely to appeal to casino-like stuff, where every single jingle, color, flash of light, and other aspect of their games is carefully researched in order to maximize addiction on a subconscious level.
...which has been known for at least a century
Here is an important difference. A century ago, the predator (seller) and the prey (buyer) were on equal evolutionary terms. Each generation of humans on either side of the transaction came into the world, learned to convince, learned to resist, then passed, and some balance was maintained. In this century, corporations and algorithms don't die, but the targets do. This means that the non-human seller is continuously, even immortally, learning, adapting and perfecting how to manipulate. The target, be it adult, adolescent, or child, is, and will be ever increasingly, at a severe disadvantage.
Ah yes, because trade secrets were never a thing at any of these companies. The companies always shut down when their founding members died, wiping out all the knowledge they had built up.
That is to say organizations have always had this edge on individuals.
Right, because we know that parents never pass down useful skills or life tips to their children, like skepticism of propaganda and advertising, and instead send their children into the world like sheep into a lion's den.
There might come a day when advertising is too flawless for a human mind to resist it, but we're not there yet.
Most people think their behaviors and decisions are not meaningfully influenced by advertising. Companies spend literally trillions of dollars running ads. One side is right, one side is wrong.
And advertising largely relies on this ignorance of its effects, or otherwise most people would go to much greater lengths to limit their exposure to it, and governments would be more inclined to regulate the ad industry as a goal in and of itself.
Advertising is just companies saying "This is what you can purchase from me - it's awesome - please consider purchasing it". I have managed hundreds of millions in ad spend for major brands. None of them rely on weird ad magic to persuade people secretly - just showing off different aspects of the product or service.
And only recently could be optimized in real time, individually, for each target. I remember when there was a big moral scare about "subliminal advertising". People were appalled that an ad on TV could manipulate you without your awareness. That is 100% the business model of modern social media advertising.
It's not embedded in a specific ad, but the entire operation of the promotion algorithms.
Users as prey is a terrifying but not unrealistic narrative. Thanks for sharing.
The only businesses that call customers "users" are software development and drug dealing :)
Facebook employees may be the easiest prey to program
If something as crude as flyers in bathroom stalls is effective
FB uses its addict money to pay those employees. I assume the pay is what’s effective. Actually a good business model. Pay employees to improve how addictive your drug is, get more money from addicts, and use that to pay your employees more money, completing the loop.
But then drugs being profitable isn’t really news
It also says a lot if that's the most effective way vs normal ways of disseminating the info.
I haven't worked at FAANG so maybe I'm out of the loop, but flyers on bathroom stalls seem bizarre, almost less of a corporate action and more of a personal one (like you might get for unionisation), but with all the corporate messaging, like something you'd see in a company memo.
Like I say, maybe everyone else is accustomed to this idea, but if you have any pictures of them I think a lot of people would be interested in seeing them, unless I'm misunderstanding what it is.
It started as Testing on the Toilet, which was an effort to get people to actually care about unit-testing their code and software quality and writing maintainable code that doesn't break in 6 months. Later was expanded to Learning on the Loo, general tips and tricks, and then Testing on the Toilet became Tech on the Toilet. It's been going on for a good 20 years now, so that's about 1000 articles (they change them out weekly) and there aren't really 1000 articles you can write about unit testing.
The insight is actually pretty similar to Google's core business model: when you're going to the bathroom, there isn't a whole lot else you're doing, so it's the perfect time to put up a 2-3 minute read to reinforce a message that you want people to hear but might not get attention for otherwise.
It's not really a FAANG thing. I bet you've seen the memes about X days without a serious accident, or without stopping the production line. It's the equivalent in a restroom or a urinal: A place you can make sure people see key information. You can find this in many industrial sites. A call center might have reminders of core principles for how to close calls quickly, or when to escalate. A lab might have safety tips. A restaurant will remind you of hand washing. An industrial site of some important safety tip or two.
While I've not seen this in every single place I've worked, it's very common.
You're right that it was just other employees who decided what to print there. But I don't think that absolves the company (Facebook) really... Everything a company does is just things that its people do! Nothing about the flyer was outside the parameters of the job of its maker. Their job was to make the company money by helping advertisers maximize ad revenue, and that's exactly what they were doing.
Facebook had a serious internal propaganda arm when I was there. Couldn't manage to get floor length stall walls in most of the bathrooms, but every stall had a weekly newsletter about whatever product stuff.
Every high-traffic flat space on the wall would be covered with a poster, most of them with designs lifted from US WWII propaganda, many hard to tell if satire or not. I was surprised there was never one about carpooling with der Führer.
I can say at Google we usually just had engineering tip posters in the washrooms; they were usually very insightful and just written by other engineers at the company.
Stuff like how to reduce nesting logic, how to restructure APIs for better testing, etc.
People usually like them. I can't say I've seen what the parent post described so I imagine it's "the other" FAANG mentioned here.
Yep, I frankly thought Testing On The Toilet was pretty great.
That and nice washing toilets.
Alright. I may object to the wording, but ... isn't what you described also a good website? I am aware of how much propaganda Google uses too, e.g. "engage the user" - you see that on YouTube with "leave a like". They are begging people to vote. Not for the vote, but to engage them. I saw this not long ago on Magic Arena by Wizards of the Coast. They claim "your feedback is valuable" but you can only vote up or vote down. That's not feedback - that is lying to the user to try to get the user to make a reaction and tell others about it. I just don't really see the difference. You describe it as them manipulating people, but ANY ad department of a company uses propaganda and manipulates people. Look in a grocery store at how many colours are used in the packaging. Isn't ALL of this also manipulative?
Google doesn't beg you for likes. Channels beg you for likes because it's one of the metrics they are stack-ranked by. Someone will lose, and they don't want it to be them.
Did you take a copy of that flyer? I would be interested to see it.
I looked and I do not have a photo
Don’t be evil
The public got a peek at it with Cambridge Analytica creating hundreds of thousands of personality profiles, which they then used to create Trump's MAGA army of flying monkeys. The Democrats could have done something about it, and made it illegal, but instead they just decided to build their own armies of flying monkeys. Why? Because both sides are bought and paid for by the same rich parasites trying to reprogram us.
did democrats create flying monkey armies? I haven't seen anywhere near as much Democrat propaganda as Republican, which is probably why they keep losing. Only recently, once Republican policies came into effect and people experienced their consequences, did Republican votes decline.
To me this is simply a consequence of the capitalist mode of production.
Yes, because governments are so restrained in their use of propaganda.
What it is is the consequence of the power existing. 200 years ago nobody was arguing about how to hook people in the first 0.2 seconds of video, but it's not because nobody would have refused the power it represents if offered. They just couldn't have it. It's humans. People want this power over you. All of them.
To be fair, it is basically one and the same. I doubt most people railing against capitalism are actually against private property. They probably dislike corporatism, which only exists as an extension of the government. Very, very few of us voluntarily gave up our right to hold people personally responsible for their actions, but this is forced on everyone on behalf of business interests. The corporate veil is materialized from government alone.
> I doubt most people railing against capitalism are actually against private property. They probably dislike corporatism which only exists as an extension of the government.
I really don't know. In my experience, it can be about private property when talking about housing, about markets when talking about salaries and work conditions, and, the vast majority of the time, just about having no idea what capitalism even is and vaguely pointing at economics.
"Capitalism" can be safely replaced with "the Illuminati" or "chemtrails" in the vast majority of complaints I hear and read, and the message would ultimately make as much sense. There's not a lot of how or why capitalism doesn't work, but by God there sure is a lot of what it seemingly does wrong.
Who said anything about government? I thought it was humans and people?
> All of them.
At least an unhealthy amount of them. I have no desire to have power over people, except I would like it if my kids actually listened to me...
Well-adjusted people do not want that power over other people.
It's sociopaths and narcissists who want it.
And as Atlas667 pointed out, it's also a direct consequence of a capitalistic worldview, where it has replaced your morals.
This is not in relationship to state propaganda. Multiple things can cause abhorrent behavior, and just because we've identified something as problematic doesn't inherently imply that other unrelated examples are any better.
"Well adjusted people so not want that power over other people"
There are certainly well adjusted people that would like to fix things they feel are inefficiencies or issues in their government, especially when those issues are directly related to their areas of expertise. Thinking well adjusted people wouldn't want to be in a position of power is exactly how you ensure that only bad people end up with power.
Power seekers acquire power, not knowledge seekers. This is from Plato’s The Republic so about as old as it gets.
We've always had sociopaths and narcissists, and if you're looking to "capitalism" as the reason why they exist, you're in out-and-out category error territory, not-even-wrong territory. Now that this power exists to be had, human beings are racing to acquire it. If you think you can fix that by "fixing capitalism" you are completely wasting your efforts.
So if that’s not the answer, what is? Should we just throw our hands in the air and say that technology has defeated our monkey brains, and there’s no going back?
Given that these tendencies are not evenly distributed throughout the population, you can have structures that leverage the broad middle of the distribution to mitigate the worst tendencies of the extreme tails. Given that the natural state of things is that power begets more power, these are harder to build and maintain, but it can be done. In particular, Democracies and Republics are major historical examples of this.
[deleted]
Is your name Epson? Because you’re really good at projecting.
Your comment speaks volumes about you, not humanity.
History contains abundant, well-documented cases of ordinary people participating in atrocities without coercion. Most people will act decently in low-pressure environments and will act badly under certain incentives, authority structures, or group dynamics. There is no way to know what a person's threshold is until it's tested, but it can be assumed that most people have a low threshold.
Parent was implying “all” humans crave this power over others. This is patently false.
“Most” people won’t act badly to attain this power, “some” will. Being placed into a position and choosing harm is not the same as pursuing it.
That is absolutely against the evidence, but yes people do like to think they are naturally righteous and good.
What evidence is there that ALL humans crave power over other humans?
That may be true but I think the unspoken assumption in your comment is that somehow, without capitalism, greed magically melts away. How do you explain the constant extreme rampant corruption in communist and socialist countries over 100 years if not from GREED?
I know that it doesn't. Greed will be ever-present, yes, but that doesn't mean that it's a one-way ratchet. It's something we have to keep fighting against all the time. Greed starts out as a driver of progress, then eventually becomes an impediment to progress. The other constant there is progress! No dam will block a river forever.
Capitalism or consumerism, a never-ending supply of and demand for goods, material or immaterial?
Large portions of the tech sector thrive off the attention economy. If your goal as a product is to have someone spend hours a day everyday engaged with your product, and you focus on a data driven approach to maximize the time spent on the app, then you’ll create something not dissimilar to addiction.
Supposedly the people working for these companies are "the brightest of the bright", but if they didn't even notice that this was what they were contributing to, what kind of intelligence is even that? Not everyone working there could possibly be so socially inept that they didn't realize what they helped build, right? Or are we chalking it up to just missing morals? I feel like I'm missing something here to properly understand why people ended up working for these companies in the first place, even before it started making the news.
It's basically impossible for them to not notice. I know someone who is a software engineer for Lockheed. He told me that back in the 90s he wrote a bunch of software for a missile. He wasn't told that is what he was working on, it was all classified, and part of that is you only know what you need to know. But from the specifications and how the math worked, it was very clear to him that it was a surface-to-air missile. After the fact, it was confirmed that is what he was working on.
Google and Meta are surely more open than a classified missile project. So it would really be beyond the pale for someone to not realize that what they are working on is an addictive platform. Sure, I am willing to bet they didn't say "addictive" and instead cleaned it up in tidy corporate product-management lingo, "highly engaged users" or something like that. But it's just impossible not to notice.
It’s simpler than that. Engagement increases are the perpetual goal. The vector is constant. Relative motion is all that matters.
Nobody would talk about whether the product is now “addictive” because that suggests crossing a finish line to completion, and we can’t ever be done.
It is interesting that Software Engineering, as its practitioners like to call it, is unregulated.
If you want to be an accountant, lawyer, surveyor, et cetera, you have to learn about ethics, and violating your professional institute's code of ethics may result in you being unable to practice in future.
Professional engineers are required to consider the interests of the public in their work, have an obligation to reject unethical or harmful instructions and are regulated by their professional organization to support competency and address malpractice. Much of this was driven over the past 50-100 years as society determined that they wanted things built by engineers to not kill people or have material deficiencies following construction.
From my understanding, software engineers are a long way out from this still, but perhaps we'll get there once the dust settles on more of these sorts of lawsuits.
The dust will never settle, because once people try to regulate, companies can basically move software engineering wholesale somewhere else. Being active in multiple places gives these companies leverage. There's not just a cost advantage to having Amazon in Luxembourg: employ a few thousand people (10,000 jobs are linked to Amazon in Luxembourg) and you can block votes in Europe (because of veto power). 10K jobs is nothing for Amazon but is 2% of all jobs in Luxembourg.
In the same way, Amazon being big in India isn't just great because of the vast talent pool and 'low' costs (even if many, if not most, Indian programmers are subpar, there are over a billion people). Because these jobs are concentrated in a specific region and India isn't a unified state, they basically ensure that the government in India can never turn against Amazon. Amazon can try getting into many different things in India without the risk that a small foreign company breaking into India would have.
> basically move software engineering wholesale somewhere else.
You don't think that is true in other professions? You don't think I could get my accounts done in India, or a bridge designed in China? The regulatory environment in my country would still apply. Your answer is just exceptionalism
There's no need to have software engineering be regulated. It'd be a restriction/deterrent at the wrong level.
In order to fix this we need the individuals in charge to be held legally accountable without hiding behind a corporation.
In the software industry management rarely ever listens to concerns brought up by engineering even if it's technical concerns.
It's extremely embarrassing that my (American) employer refers to me as a "software engineer" when in fact I dropped out of the university computer engineering program and cannot legally call myself an engineer in my country.
I would just as soon call myself a software doctor or software lawyer. Or software architect.
We don't even need formal regulation to start — just honest internal conversation. I work in tech and most teams I've been part of never once discussed the ethical implications of what we were building. Not because people are evil, but because the incentive structure doesn't reward asking "should we?" — only "can we ship it?"
The gap isn't education, it's accountability. Engineers building engagement loops know exactly what they're doing. They just don't have a professional body that can revoke their license for it.
Your comment is saying two very different things?
> We don't even need formal regulation to start — just honest internal conversation
> They just don't have a professional body that can revoke their license for it.
What internal conversations could lead to a professional body that can revoke anyone's license? I'm sorry, but your comment doesn't make much sense.
Edit: Dammit, I realize now I think I fell for vibebait, leaving for posterity so others don't fall into the same trap.
Fair point — I contradicted myself. What I meant is: the first step doesn't require waiting for regulation (just have the conversation). But long-term, some form of professional accountability would help. Those are two different timescales, not alternatives. I wrote it badly.
And no, not vibebait — just a poorly structured comment from a guy with a fever typing on his phone.
I am amazed that I’ve never considered this before
Have been surveying Computer Science courses at university with my son recently. All the ones we looked at had a compulsory ethics module which shows the direction things are headed at least.
I wonder how many programmers working today came through universities, though? I'm self-taught, most of my programmer friends are as well, same with most of my colleagues back when I worked. I can remember maybe the names of 3-4 people in total, out of maybe ~30 or so, who went to university for computer science before they started working.
In my experience CompSci ethics modules are about hacking or mishandling user data or code theft... i.e. things that companies don't want their employees doing.
I've yet to see an ethics module that covers ethics from the perspective of ethics over profit.
Whereas an accountant is taught that they should resign rather than get involved in unethical practices, like profit manipulation for example. I interview people with ethics questions. I discussed them frequently when training.
I refused the pressure to be unethical when I was pushed, even when I knew I would be fired (which I was). I was able to discuss it with old mentors, who made time to meet with me, even when I hadn't worked at their company for years.
Lastly I disclosed why I was fired at interview for a new job (without the confidential details), and was hired partly on the strength of it by a person who had been through much the same.
And I didn't learn it at university; I learnt it on my professional qualification, which was around 3 years long and at postgraduate level, although it had non-degree entry routes for technicians. It also required a wide range of supervised experience.
This was not at all the ethics program that was taught in my university computing ethics course. They did indeed cover the societal and moral responsibility of software developers. This was way back in 2002.
Mine had one over a decade ago. After I graduated, the industry decided that building everything we had just established was unethical would be the hot thing to innovate on for the next decade. I never worked at any of those places and still got burned ethically in much more indirectly unethical product streams in the finance and insurance sectors. To be honest, if there is really good money to be made at this point, it's a safe bet that if you dig deep enough, there is an unethical core to it. Most of my peers assuaged themselves with some variant of "I'm a programmer, not an ethicist, and philosophy doesn't put food on my table." So sadly, the problem seems much more systemic and a priori to the capitalistic optimization function.
This is a comment where my reaction differs based on your age. If you're older, I'd be more disappointed. If you're young, I'd be more sympathetic. However, the careers mentioned by GP all require schooling where those ethics courses can be taught. In "Software Engineering", so many people are self-taught or have taken boot camps without formal schooling. The SE title is just a joke to me, knowing that it is so overused and given to people who clearly are not trained as engineers.
Maybe we should have Gavin Belson's Tethics be more widely taught???
Whereas accountants, lawyers, civil engineers and surveyors have to do postgraduate training with their institute to become chartered.
Interestingly many accountants in the UK never did a degree (very many more did a degree in something unrelated), but came through the technician route of evening, weekend or day release study. Many do their chartered training at weekends.
We have separate words for intelligence and wisdom for a reason.
Intelligence is not particularly correlated to ethics or morality. Probably sounds obvious when I say it directly, but it is clearly something that you have banging around in the back of your mind. Bring it forward out of the morass of unexamined beliefs so you can see that it is clearly wrong, and update the rest of your beliefs that are implicitly based on the idea that intelligence somehow leads to some particular morality as appropriate.
Because nobody is clocking in and willfully contributing to the addiction machine. They're completing an 8-point ticket to integrate a new scroll-tracking library, or a 5-point ticket to send an extra parameter to the logging system. When there's thousands of people working on a product, nobody feels like they're doing anything impactful.
> Because nobody is clocking in and willfully contributing to the addiction machine.
Are people really not aware of what the company's overall mission, product and impact is? I'm finding that hard to believe. If you accept employment at Facebook, regardless of what department you're in, you know exactly what kind of company you're contributing your time, energy and effort into.
I joined Google Analytics in 2018 and had no idea that Analytics really meant "Tracking and Remarketing" until about 3 weeks into the role. At that point, what're you going to do, quit? I knew it wasn't what I wanted to do, but it took two years to get out cleanly.
> At that point, what're you going to do, quit?
Yes? Why not? If I joined a company and figured out that what I did actually harmed more than it helped, I'd leave that place, absolutely. I'm a software engineer; even in the lowest possible position at a random company I'd earn better than most people in the country and live a better life, even among just the bottom 30% of earners in software in the country (not counting outsourcing, obviously). Especially at that time it was very easy to find new jobs.
Good for you. I've got a family and no other source of income.
You think Google is the single company out there who is willing to employ you? How come?
Edit: Thinking about it, your comment actually made me more frustrated than I realized. I've been poor enough to have been homeless at some points in my life, and yes, I've worked for immoral companies too, because I needed food and shelter. But once you move up in life to the comfy jobs like software engineering, you don't have any excuse anymore that it's just about "feeding your family" when you literally have a sea of jobs available. It might be an excuse you tell yourself to justify getting paid more, but if you truly did care about it, you'd make a different choice, and still be able to feed your family, and I'm almost confident your family would be OK with you making that choice too, unless they also lack the empathy you seem to be missing.
You were homeless and didn't have a choice, so now obviously you're qualified to give assurances that essentially, "it is unlikely that your family will starve", right? /s
And if you're wrong, and shit hits the fan for whatever reason, who's going to fix that? You? No, he's going to have to fix that, because nobody else is going to step in.
It's easy to tell others that it's going to be OK, but put your money where your mouth is. Put $1M in a fund that he can access should he no longer be able to find employment. Then he'll have absolute certainty that it's going to be OK.
Something tells me you're not going to do that. Something tells me that what you would do if shit hits the fan, is you're going to tell him that he should find solace in the fact that while he's working for 1/5th of his former total comp, putting in more hours at the same time, seeing his kids less, not putting his kids through private school to give them the best chance at the best education, that, at least, some kid out there isn't watching 6-7 videos on the tablet that their parents use to do less parenting.
> You were homeless and didn't have a choice, so now obviously you're qualified to give assurances that essentially, "it is unlikely that your family will starve", right? /s
Yes, again the context is software engineering: the floor of what we earn as software engineers is above what other careers have as their maximum, and if you've been a developer since 2018 (almost ten years of experience) you're not having a tough time finding a new job, especially if you were at Google.
People get comfortable with their new living standards, that's natural. But they said they were able to get out, just took time, I'm guessing that's about vesting something, not because it's hard to find new opportunities.
Sure, but you could say that about anything. If you're American, then your labour is paying for concentration camps no matter where you work. In a company of 100k+ people, responsibility is diluted.
The problem is, between producing cigarettes, weapons, disposable fashion, sugary food and drink, disposable vapes, extremely wasteful cars, addicting game mechanics, many of the financial "products", ad optimisation, ..., not everyone can avoid immoral but legal work whilst trying to exist in this economy.
> not everyone can avoid immoral but legal work whilst trying to exist in this economy
let's be honest
everyone working as a software engineer at facebook is perfectly capable of finding employment elsewhere
working there is a deliberate decision to prioritise comp over the stability of the world
agreed about this group in particular.
> not everyone can avoid immoral but legal work whilst trying to exist in this economy
We're talking about software engineers here, not "cleaners taking up any job they can". Literally one of the most well-paid jobs considering the amount of effort you put in. People slave away in fields picking berries for less, with more impact on their life expectancy. If there is any career where you can jump between jobs in just a few weeks, software engineering is one of them.
I think it's a fair point to say that many people do not feel as if they're the ones responsible even if they're direct contributors.
Fatten up that wallet with 500K a year and tech stock RSUs and people pretty quickly forget about their morals. Seriously, they tell themselves the same story: "ah this is just temporary. I can make big money for a couple years then get out." But 2 years turns to 5, then they buy a house in the Bay area and now they're stuck. Same thing for Seattle.
Many people get used to the paycheck before they really discover the extent of the company's predatory practices. A lot of people will choose their own comfort and stability over morality.
Intelligent does not mean moral.
Typically, intelligent people get so much joy out of being able to do something (such as addicting the masses), they do not stop to think whether that's a good idea. Especially when that's the very thing that's fuelling their extremely lavish lifestyle.
A lot of folk justify it to themselves.
I've heard "well, you have to change things from the inside" before.
And a lot of people have been there for a while, it wasn't always... quite as bad even if a lot of the warning signs were absolutely there.
I was actually just thinking to myself this morning that I literally have no idea what these feeds look like at this point, but more and more people seem to be looking at me with envy when I say I don't have any lol. I'm kind of curious and might ask my friends if I can see what they're looking at day to day if they'll show me.
250k per year would make me reconfigure my internal right/wrong classifier real fast...
Really, that little? Don't feel even slightly embarrassed about your morals being so cheap? You'd hurt your neighbors and acquaintances for 20K a month?
Given you probably don't earn that today, say you got paid that now instead of whatever you earn, what would you spend that money on in reality?
I would feel embarrassed, yes. But that's 5 times my current salary for a 'similar' position. I am not sure it'd be 5 times worse in terms of societal effect. And even if it were, I am not sure I would be 5 times as embarrassed, if we are considering a linear conversion rate.
What I am trying to say is that I am on your side - as of this moment it is incredibly unlikely that I would ever see this kind of money. That makes it an easy position to take in an online conversation. But I have seen decent people throw out morals for a 100th of what we are talking about.
if you're literally anywhere in the west other than bay area or NYC this is good to amazing money
Sure, but what about all the other aspects of your life, those contributing more to your happiness? Corrupted people have money as their top goal in life, everyone else is trying to live a good life once they have enough, but there seemingly is no "enough" for quite a large part of the population, and in some places of the world this obsession seems worse than in others.
Hate the game, not the players. Somebody is supposed to be regulating this stuff. If you're in a poor city or country having a shot at such compensation would be life changing for the whole family, not just you and game-theoretically someone else will take that job anyway, for similar reasons, too.
That's a shame, but don't feel sad. While not everybody is on your side, certainly you're not alone in this.
I've left jobs paying that much over ethical concerns. My soul is not for sale, and neither should yours be.
At this point I would be more worried about working for a US company, than which one exactly - (not totally serious of course, but also not entirely inaccurate)
> My soul is not for sale, ...
This makes two of us. Nice to see a similar-minded person. Cheers to you!
People are motivated by money, and the aspects of the job that aren’t toxic.
> I feel like I'm missing something here to properly understand why people ended up working for these companies in the first place
Money.
In the words of Upton Sinclair: "It is difficult to get a man to understand something, when his salary depends on his not understanding it."
This affects the brightest of the bright and the less talented alike.
I believe the quote is, "it's difficult to get a man to understand something when his salary depends on his not understanding it"
> Or are we chalking it up to just missing morals?
Surely it's this, right? I just had what I would consider an intelligent conversation with someone wherein we eventually settled on a core ideological difference between us: I thought all humans have equal value (infinite and immeasurable), while he believed a human's value is only as much as said human can generate money within capitalism (basically, if their salary or net worth is low, they must not be very valuable people, and we shouldn't do things for them like give them healthcare).
I think it's a bit of a dangerous fallacy to assume that intelligence naturally leads people to arriving at your own personal ideology. There were plenty of highly intelligent Nazis or Imperial Japanese. They either didn't care about the regimes they supported or leveraged their intelligence to rationalize it (requiring fallacy to do so of course - or perhaps not, if they really did just want their subgroup to dominate all others and believed it was possible to do so).
For me it's not smarts alone to define my value system. It can't be purely rationality, since the premise of deciding good and bad is subjective and dependent on what you value. You can argue these things rationally and use logic to determine outcomes, but at the end of the day there's a messy human brain deciding good/bad, important/not important, relevant/not relevant.
Hey emsh! So what I had written was quite long, and I have been writing it since your comment was 6 minutes old.
https://writeforfun.mataroa.blog/blog/the-brightest-of-the-b...
Essentially a thought dump. Hope you can read it and we can discuss it.
Have a nice day!
While I appreciate the mention and the flattery, I'm just a person, my thoughts are not special, and I'm starting to feel like you might be better off not focusing on what specifically I am saying :) Again, I do appreciate it, can't deny it feels nice initially, but don't treat me like I'm special, I'm not, I'm just sharing stray thoughts sent out into the ether like everyone else.
haha, sorry, but I didn't intend to flatter you (at least this time, although your work in oaoh is commendable); I literally just wrote it as a (blog?) because I was already writing a comment and I didn't want a massive wall of text, & also you can see from my other project that I wanted to write more as blog posts rather than HN comments when they get too big.
I mentioned the 6-minute mark to show how long it took me to write the comment, which was around ~50 minutes. And I mentioned you many times in the comment because I had written something first and then wrote something on top of it, hence the many initial mentions.
Anyways I have now removed the mentions and honestly a lot of this is just transparency efforts.
I literally just write whatever I am thinking :)
At the end of the day these brainiac innovators still just chase money and tail.
[dead]
In a perfect world, when you realise that your company creates and fuels addiction in children, that company should be concerned about having resulting profits seized (fully!) and responsible decisionmakers criminally prosecuted.
I would argue that we fail completely at doing this (historically, too, see e.g. leaded gas).
This incentivizes companies toward net-negative behavior, despite knowing better, until it is fully regulated, because it is clear that it won't really be punished anyway and will remain a net positive for them.
It is a difficult problem though.
I am all for it, I do not think Mark Zuckerberg deserves any of the billions of dollars he has and he has contributed nothing to society in return for that. On the contrary, everyone knows his contribution has been a net negative but our systems do not accurately reward positive contribution, or disincentivise the negative.
It has been frequently demonstrated that capitalism is terrible at pricing in externalities. In the U.S. it used to be industrial waste in the environment; now it's climate change and addiction and culture wars.
Without discounting social media's harmful effects:
I do think Instagram in particular has been a boon for small businesses, providing them visibility in the marketplace that was previously unavailable to them.
Social media has also been a way for communities to connect organically with discoverability features missing on the old web.
There are positives and negatives - if it was only negatives people would be quicker to abandon the platforms.
Although I'm not familiar with the case at hand, I agree there's potential there for real harm, especially to children.
Upvoting you because all the Zuck worshipers and meta stockholders downvoted you for speaking the truth.
The guy is worth a quarter trillion dollars and doesn't seem intent on calling it a day, and insists on destroying society's youth so he can make more money. Intelligent or not, that's a mental disease.
Imagine having that sort of money where if 99% was taken away, you'd still have over 2 billion dollars to your name....and you refuse to just walk away and focus on things like your family, making the world a better place, or just enjoying your life. Tom took the money for MySpace and actually seems to enjoy time with his family, traveling, doing photography, etc.
For all his (many) faults, Gates took a look at the polio virus and said "I'm bigger than you" and pretty much spent until it was wiped out. Doesn't counteract the bad or the Epstein stuff at all, but wiping out polio has helped people.
Mark's done jack shit to genuinely help people besides his shareholders and his immediate family. One might argue that his whole bunker thing is an indicator that he's realized he's done tremendous damage to society, but instead of fixing it, he's insulating himself for when the proverbial bomb goes off.
Reminds me of a Judge Dredd story from the 80s.
A confectionery company invented a type of bubble gum (called "Umpty Candy") that became addictive not because it had any drugs in it, but because they kept optimising the taste until it became too delicious to refuse.
"the bliss curve" [1]
I was thinking Robocop had the relevance factor these days, though I do enjoy the aesthetics of Dredd.
Potato chip brands spend enormous sums of money on flavour engineering for that very goal.
fun fact: even though the attribute the people working for those companies are optimizing for is "addictability", they call it "snackability". Probably b/c of motivation and ... legal.
additional fun fact: Philip Morris (yes that one) controlled the majority of these companies and brands during the 80s & 90s.
[deleted]
I mean engagement is the game. The overlap of other mediums like TV, movies, music, gambling clearly have the same focus, though they could only wish to have the same death grip that social media has been tuned to achieve.
So many problems on the internet stem from products trying to be “free” and funding themselves with ads.
I’m starting to think that we need to push for more of the internet going behind paywalls, which is weird because I’ve always been somebody who claimed to hate walled gardens and supported “information should be free”.
Many of the products, while they do provide value, aren't providing services attractive enough in their own right to generate multi-billion dollar companies. Facebook is pretty much a niche product, Instagram provides maybe a little entertainment, but without the addiction part it's not really worth as much as Meta shareholders would like.
Same with search, or AI: clearly there's value, but it's hard to become a $1T company while still being ethical. We need the world to be okay with much, much smaller, less valuable tech companies.
Bullshit Asymmetry.
The ease of creating digital data has led to an infinite sea of bullshit. Ads/attention economy are just the newest layer of this asymmetry. Curated datasets are a solution to the problem, as this was how old media worked. The problem is then how it will be paid for.
"[T]hrive off the attention economy"? What a sinister way to describe building products that people want to use to connect to people whose words and images they enjoy. Nobody is pushing drugs here. There's no fraud or deception. The whole situation is Alice not liking the medium Bob and Charlie use to communicate and what they say to each other over it. Alice needs to mind her own business. She doesn't get to use the power of the state to separate Bob and Charlie just because she's indignant.
When you define "addiction" as anything people do at a level you consider excessive, the word expands to cover every domain of life and so becomes worse than useless.
I’d argue that just because there’s no clear indication of fraud or deception immediately apparent doesn’t discount the reality that much of society has become dependent on their phones.
It’s pretty clear it’s designed that way—otherwise, its effectiveness wouldn’t be nearly as troubling as it is.
Advertising absolutely has overlap with propaganda, and engagement remains the central focus of the millions of apps that populate stores and devices (along with the constant stream of ads that accompany them).
Working in transitional housing brings a unique perspective that the vast majority of everyday people never get. When you do this for a time, you start to recognize patterns and the overlap in environments around you. In the case of addiction, it certainly applies to a whole swath of life that most never notice.
Not to argue too much because I agree with you, but it bears mentioning that many of these companies absolutely study the techniques employed by casinos et al and now you can see sports gambling using techniques refined by social media companies. The dialogue there is very damning when it comes to assessing whether they’re being deceptive/bad actors.
You could describe drugs the same way, no? building products to connect people to substances they enjoy? There would be no fraud and deception too.
This is not about Alice liking or disliking it. This is about allowing Mark to engineer a system where statistically too many Bobs and Charlies can't refuse (for the same reasons gambling is more common in poor communities), making society worse off as a result.
[deleted]
How is it sinister to describe what it is? The industry literally uses that term. Their entire goal is to maximize the amount of time you are on the screen by grabbing your attention with every single lesson they have learned from decades and billions of dollars of research, almost universally in service of throwing ads up in front of you. More time = more ads = more revenue.
It is not a fair fight and to act like this is anything less than a corporate-run legal addiction machine is way too generous to these companies given what we know now. Sometimes I feel like people only consider something addictive if it involves slot machine mechanics or an actual narcotic. But we know now it’s much broader than that.
Your argument held water in 2010. Not in 2026. We know better now.
Will there be any outcome to remedy the situation from this even if actual harm is proven to the letter of the law?
Seems not so far back the Sacklers were proven(?) to have profited from and fueled the opioid crisis while colluding with the healthcare industry - and last I heard they were haggling over the fine to pay to the state, while using various financial loopholes to hide their wealth under bankruptcy and offshore instruments.
What, then, of the trillion-dollar companies that can drag out appeals for decades and obfuscate any and all recommendations that may be reached?
And before that, it was the crack epidemic—and cigarettes before that. None of this is new tho, just the medium.
I recall Philip Morris pivoting their main business when it began hemorrhaging money. Essentially this pivot came in the form of becoming the largest “box-to-mouth” food producers in the world, of course applying the same addictive principles that fueled their tobacco empire to maintain profitability.
I doubt, however, social media corps like Meta will follow suit today—mostly because accountability feels more like a suggestion these days.
I'm not at all familiar with the American justice system, but would the fact that this case specifically describes the targeting of minors with such addictive tactics change things at all?
Not really, cigarette companies targeted minors, same with alcohol, they got slapped on the wrist. Worth mentioning this happened during the rise of a neoliberal policy order of checks & balances.
[deleted]
> "This case is as easy as A-B-C," Lanier said as he stacked children's toy blocks bearing the letters.
> He contended the A was for addicting, the B for brains and the C for children.
I gotta admit, I find it really trivial and silly that this is how court cases go, but I understand that juries are filled with all sorts of people and lawyers, I guess, feel the need to really dumb things down? Or maybe it's the inner theater kid coming out?
Jury selection weeds out the enthusiasts who want to be on a jury, the people who can manage to get out of it, and the people who have too much domain knowledge related to the case.
The lawyers are performers in a play, to some extent. Theatricality can pay off, in the right amounts.
> Jury selection weeds out the enthusiasts who want to be on a jury, the people who can manage to get out of it, and the people who have too much domain knowledge related to the case.
In my (extremely limited) experience, the latter two are probably true but not necessarily the first one. I've been called for jury duty exactly once so far, and it happened to coincide with a period where I wasn't particularly happy with my job situation and was pushing for some changes with my manager, which made me motivated to try to get picked so that I could stall a bit to see if my situation changed. As far as I could tell, almost everyone in the room full of like 40 people who were in the pool for the civil trial they put me in the room for first was trying to get out of it, and I ended up being the first person picked (out of I think 8 overall; there were only six jurors needed for this trial and if I recall correctly there were two alternates). It genuinely seemed to me like the lawyers were basically happy to have someone who actually wanted to do it rather than have to force someone to go who wasn't going to want to actually pay attention or take it seriously.
My guess would be that they don't want someone who's enthusiastic because they have a particular agenda that's against the verdict they're looking for. If you're a prosecutor, you're probably not going to want to pick someone who's obviously skeptical of law enforcement, and if you're a defense attorney, you're probably not going to want someone who's going to convict someone because they "look guilty". I'm not convinced that someone who really wants to be on a jury because they thought it looked fun on TV or something but otherwise doesn't have any clear bias towards one side or another would get weeded out, especially for most civil cases where people probably won't have as much concern about either letting a guilty person go free or putting an innocent person behind bars.
I get the reasoning behind that kind of jury selection, but yeah it seems this would also select for the most gullible people to be in the jury - especially if you want people without domain knowledge.
That's what both the defense and the prosecution are looking for.
The same will happen with expert witnesses; both bring in people willing to say virtually anything, for the right pay.
Ok, but at least expert witnesses are constrained by the basic state of science in the field: They can certainly have a biased opinion but they can't go against established knowledge - and the other party can also interrogate them and try to show holes in their argumentation.
Whereas for jury members, the only people who could do that are other jury members, who would be just as clueless.
(I get that you don't want a jury with wildly different levels of domain knowledge. e.g. if you had one "expert" and the remainder being laymen, the expert could quickly dominate the entire jury - and there would be no one there to call out any bias from them)
> Ok, but at least expert witnesses are constrained by the basic state of science in the field: They can certainly have a biased opinion but they can't go against established knowledge
How can you tell if you're not also an expert?
> the other party can also interrogate them and try to show holes in their argumentation
Yes, and when the science is beyond the experience of the jury, experts giving opposite opinions will be as hard to distinguish as conflicting non-expert witness testimony (or even the testimony of the defendant compared to the accuser or litigant).
> at least expert witnesses are constrained by the basic state of science in the field
This is absolutely not the case.
> and the other party can also interrogate them and try to show holes in their argumentation
Sure, and now the jury - with zero domain knowledge - sees two very confident sounding experts who disagree on a critical point... and you wind up with it coming down to which one was more likeable.
[dead]
case in point - rounder corners worth billion(s)
These were the opening remarks of the lawyers on one side. You’re right that it’s theater, because they’re trying to hook the jury with an idea and get it to stick.
It’s also not a great sign that they’re relying on such tricks and props to hook the jury. In stronger cases they’ll rely on actual facts and key evidence, not grand but abstract claims using props like this.
I don’t know how the rest of the opening remarks went, but from the article it looks like Meta’s lawyers are already leaning into the actual evidence (or lack thereof) that their products were central to the problems:
> Meta attorney Paul Schmidt countered in opening remarks to the jury that evidence will show problems with the plaintiff's family and real-world bullying took a toll on her self-esteem, body image and happiness rather than Instagram.
> "If you took Instagram away and everything else was the same in Kaley's life, would her life be completely different, or would she still be struggling with the same things she is today?" Schmidt asked, pointing out an Instagram addiction is never mentioned in medical records included in the evidence.
Obviously this is HN and we’re supposed to assume social media is to blame for everything, but I would ask people to try not to be so susceptible to evidence-free narrative persuasion like the ABC trick. Similar tricks were used years ago when it was video games, not social media, in the crosshairs of lawyers looking to extract money and limit industries. Many of the arguments were the same, such as addicting children’s brains and being engineered to capture their attention.
There’s a lot of anger in the thread about Discord starting to require ID checks for some features, but large parts of HN aren’t making the connection to these lawsuits and cases as the driving factor behind these regulations. This is how it happens, but many are cheering it on without making the connection. Or they are welcoming regulations but they have a wishful thinking idea that they’re only going to apply to sites they don’t use and don’t like, without spilling over into their communication apps and other internet use.
> Meta attorney Paul Schmidt countered in opening remarks to the jury that evidence will show problems with the plaintiff's family and real-world bullying took a toll on her self-esteem, body image and happiness rather than Instagram.
I feel like this must be an indication of an inherent flaw with a society designed around the idea that litigation originates in an individual's singular harm received from a company, outside of I guess class action lawsuits, which to be fair, I don't know much about. But I'm reminded of the McDonald's coffee case, when McDonald's was able to leverage their incredible capital power to make that woman look like such a crazy litigious hysterical lady that people to this day use it as an example of how Americans are just inherently trivially litigious, when in reality, that coffee was just way too hot.
Nobody can stand up to the might of a trillion dollar company. The resources they have at hand are just too vast.
>Nobody can stand up to the might of a trillion dollar company. The resources they have at hand are just too vast.
Which is why Americans need to curb the lobbying power and communications power of trillion dollar corporations, and limit the rights corporations have versus the rights of human citizens.
Or if a company is too big to be held accountable, it needs to be broken up aggressively.
I usually blame the consumer, but I think this is really different from the video game cases. Those lacked a social component, came with a warning and a rating system, and only simulated interactions.
Social media technology, according to former employees, is intentionally engineered to capitalize on dependency without the user's knowledge, comes with no rating system or warnings, and hosts real interactions, not simulated ones.
I think they have a much better case here.
It's all about putting things into the jurors' heads that they can remember and draw on once they're in deliberation. Word-puzzles like that try to imprint "addicting, brains, children," so those ideas are more prominent during deliberation.
Court is about establishing fact, not discovering truth.
Fact is defined by whatever the jury believes.
That sort of "ABC" simplicity is just good rhetoric.
It's how media works, not a representation of the jury's mental capabilities. For media, you need to have the simplest idea in a visual form if you want to have any chance to make it stick - especially when you're fighting against a trillion dollar attention-addiction industry with billions of lobbying dollars to defend their cash cow.
It is only the opening argument! The technical stuff will come later, I'm sure.
"They don't only build apps; they build traps," Lanier said, saying Meta and YouTube pursued "addiction by design," making his arguments using props like a toy Ferrari and a mini slot machine.
These are opening remarks. Perhaps we should wait until they actually present evidence.
Have you ever used YT shorts? No further evidence needed, your honor.
My YT Shorts experience: absent-mindedly watch a few, eventually think "damn, these things suck", tap the "show fewer shorts" link to reduce the chance of absent-mindedly clicking on them again soon. The format, with all its annoying little stylistic cliches, is just too irritating to be addictive. (modern Facebook is even more absurdly un-addictive).
Yes, I’m eagerly awaiting the internal emails acknowledging the prioritization of revenue over user health too!
One of the emails already released in discovery was about how Facebook deliberately tried to cultivate FOMO in teens, specifically by sending mass notifications during school hours so that people not on Facebook would feel like they were being left out.
This is wild. I would've thought everyone working at the company implicitly knew what they were doing without anyone needing to spell it out, but I can't imagine someone being so dumb as to suggest something like sending notifications during school hours in writing. Could you link to this?
Take a look at this (facebook saying teens are top priority): https://storage.courtlistener.com/recap/gov.uscourts.cand.40...
Heavily redacted document talking about the mass notifications: https://storage.courtlistener.com/recap/gov.uscourts.cand.40...
Here's reporting on this and other documents: https://techoversight.org/2026/01/25/top-report-mdl-jan-25/
HN discussion of it: https://news.ycombinator.com/item?id=46902512
It's true but also (could be) innocent. In the sense that if you A/B test things and look for engagement, you will almost certainly end up with "addictive" systems.
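To make that concrete, here's a toy sketch of the dynamic (the variant names, numbers, and metric are all invented for illustration; this is not anyone's real experiment pipeline). If the only score an A/B test ever looks at is time-on-app, the stickier design wins by construction:

    import random

    def session_minutes(variant: str) -> float:
        # Made-up engagement distributions for two hypothetical feed designs.
        base = 12.0 if variant == "chronological" else 19.0  # "infinite_scroll"
        return max(0.0, random.gauss(base, 5.0))

    def ab_test(n_users: int = 10_000) -> str:
        minutes = {"chronological": [], "infinite_scroll": []}
        for _ in range(n_users):
            variant = random.choice(list(minutes))
            minutes[variant].append(session_minutes(variant))
        means = {v: sum(xs) / len(xs) for v, xs in minutes.items()}
        # Ship whichever variant kept people on the app longer.
        # "Is longer actually good for the user?" is never asked.
        return max(means, key=means.get)

    print("variant shipped:", ab_test())  # almost always "infinite_scroll"

Nobody in that loop has to intend addiction; the metric does the intending for them.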
I think this may also be why there is so much sugar in American food. People buy more of the sweet stuff. So they keep making it sweeter.
I'm not sure who should be responsible. It kinda feels like a "tragedy of the commons" kind of situation.
Internal memos have shown that they knew children were becoming addicted and didn't take steps to reduce harm where possible, hence the lawsuit.
Obviously the government should be responsible for monitoring these patterns and regulating them when they become unhealthy at a statistical level? Having allowed the likes of Facebook to grow to this point is clearly a policy failure.
>I'm not sure who should be responsible.
If the lawsuit is about children getting addicted to your apps, who else could be held responsible? The children?
I think it's highly unlikely to be 100% innocent.
The C-suite has learned not to put so much incriminating stuff into writing (after Apple/Google etc. got caught making blatantly illegal anti-poaching agreements in personal emails from folks like Jobs), so proving that is probably gonna be tough.
And does it really matter whether it was on purpose or not? The end effect is the same: patterns designed to make people spend more time on the product, regardless of the person's mental health.
I can kill a person with a car either intentionally or unintentionally. Of course one is worse than the other, but both are ultimately bad and you should face justice for either of them, even if the punishment might be different because of the motivation/background. But neither should leave you as "innocent".
When you are C-suite, every comment you make is a legal statement. The big concern (usually) is misleading shareholders.
> every comment you make is a legal statement
Every comment people can prove you made.
They learned putting it in email isn't ideal, for that reason.
https://www.reuters.com/article/technology/steve-jobs-told-g...
I'm sure these days when Apple and Google want to set up this sort of clearly illegal deal, their CEOs meet in person, or at least use phone/Signal.
The definition of innocent you two are using here is absurd to me. This is, at best, willful negligence. Maybe no one sat down and drew up a plan for the child screen-addiction machine, but they noticed many times over that they were building one and chose to continue.
I agree, but I think even the parent poster's generous definition of "innocent" is a very low bar Facebook still can't jump over.
We're going to see the same kind of thing we saw with the tobacco industry: CEOs claiming they had no idea the product was engineered to be addictive. I have even less faith that anyone will be held accountable this time, though.
It's like stoner logic I heard from someone once:
"I smoke weed to get high, it's not the weed that gets me high"
"Social media is addicting, it's not the social media that makes it addictive."
A recommender engine that tries to capture and sustain attention in 1-2 second intervals, what else would you call it?
The traditional answer is "engagement," but there is a strong argument to be made that intentional engagement (engagement by conscious, willful choice) is not possible, repetitively, for a vast smorgasbord of content spinning by at short intervals.
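To spell out the loop being described, here's a minimal sketch (the topic names, dwell model, and scoring rule are all invented; no claim this matches any real recommender). The point is the cadence: every clip, every second or two of dwell, is a training signal, and nothing in the loop models whether the engagement was a willful choice:

    import random
    from collections import defaultdict

    TOPICS = ["cute_animals", "outrage_politics", "diy", "gossip"]

    def dwell_seconds(topic: str, affinity: float) -> float:
        # Simulated viewer: dwells a little longer on topics they're drawn to.
        return max(0.5, random.gauss(1.0 + 4.0 * affinity, 0.5))

    def feed_loop(n_clips: int = 200) -> dict:
        hidden_affinity = {t: random.random() for t in TOPICS}  # the viewer's weak spots
        score = defaultdict(lambda: 1.0)                        # what the system learns
        for _ in range(n_clips):
            # Rank by learned score, with a little exploration noise.
            topic = max(TOPICS, key=lambda t: score[t] + 0.3 * random.random())
            watched = dwell_seconds(topic, hidden_affinity[topic])
            # Each 1-2 second dwell updates the ranking; "was this a conscious
            # choice?" is not a quantity the loop ever sees.
            score[topic] = 0.9 * score[topic] + 0.1 * watched
        return dict(score)

    print(feed_loop())  # the topic the viewer can least look away from floats to the top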
Oh no! It shows you stuff you want to see? Another video about subway signaling systems? Call my lawyer!
You make it sound as though it’s completely harmless. Is there not a single personal or societal harm you can imagine to having videos attuned to your interests constantly being fed to you?
“We see that you’re slightly conservative. Next up: a Nazi sympathizer video! Enjoy your ragebait!”
I can imagine harms like that, absolutely. If I ran Youtube I'd work much harder to evict Nazi sympathizers and avoid ideological rabbitholes. But it's legal to broadcast things that will convince people of terrible ideas.
What I don't find plausible is the specific kind of harm alleged in the case discussed in the source article, where having videos attuned to your interests constantly fed to you causes you to become depressed and suicidal.
It might help you understand if you have a preteen or teenage daughter. They are extremely self conscious, prone to humiliation, have a very narrow view of the world, and don’t have all the rational capabilities of an adult.
I'm open to the possibility that I'm missing some key insight and if I ever do have a daughter I'll understand. But I think we also have to be open to the opposite possibility, that we don't like to see preteens and teenagers hurt so deeply and are highly motivated to search around for a structural lever we can pull to stop it from happening.
Those don't sound like opposites? I think those can both be true at once.
> What I don't find plausible is the specific kind of harm alleged in the case discussed in the source article, where having videos attuned to your interests constantly fed to you causes you to become depressed and suicidal.
Why couldn't it?
Start with funny videos, like clips of animals doing silly things.
Then have the occasional cringey video in there of a person being funny but slightly cringe, something akin to You've Been Framed.
Then have people who are being cringe but it's carefully framed: a well-edited video of some left-wing student pushing for a policy, but in a clumsy way, being embarrassing the way all teenagers are.
After a while longer, your feed is nothing but clips of BEN SHAPIRO PUBLICLY EXECUTES THIS SOCIAL JUSTICE WARRIOR ON THE ALTAR OF FACTS AND LOGIC.
BEN SHAPIRO CLIP COMPILATION - OWNS THIS TRANS ACTIVIST - FACTS DON'T CARE ABOUT YOUR FEELINGS!
Then you suddenly are getting people talking about their concerns™ about trans people in sports, how there might be unaddressed issues, and then Helen Joyce and the like are appearing in your feed, sounding calm and reasonable while they politely and civilly discuss how all trans people are inherently vile sexual predators engaged in a global conspiracy to sterilize your children.
More and more right wing content, drip drip drip, absolutely no one step that is that distinguishable from the previous, until eventually your feed is nothing but Q-Anon if you are lucky, and outright Nazism if not.
Not sure I follow. I don't think that turning someone into a Ben Shapiro or Helen Joyce fan causes them to become depressed and suicidal.
I mean I want to feel good all the time, that doesn't mean shooting up heroin is a good idea.
If you are somebody who does not have the willpower to stop themselves during an onslaught of 'things you want to see', it can be very damaging.
This is just one data point, but I once talked to a person who worked for YouTube. I asked him if he had children, he said yes. I asked him if he let them use YouTube Kids and he said "no way, that's completely banned at home". That told me everything that I needed to know.
Don't consume your own product.
This is weird because ... at which point does an addiction start?
People used to be addicted to watching TV, right? Well, nobody was made responsible for that. If it is addiction, and I am not necessarily saying this is not, then all websites would fall under the same category IF they are designed well enough to become addictive. Most games would fall under that category too. I don't think this is a good category at all. Both Meta and Google should pay simply for wasting our time here, but the "you designed your applications and websites in an addictive manner" ... that's just weird.
The interest of corporations, the interest of capital, is above the interest of society at large. Nothing will come of this trial.
Any tech company funded primarily by ad money is dirty.
Ignoring what I think of the case itself, I hate how many headlines now are just the talking points for one side or the other in a dispute. At least try to pretend you’re being a neutral reporter rather than regurgitating what someone with an agenda says.
"I let an iPad raise my kid and now she sucks" is a wild lawsuit premise.
If engineering addiction for children is illegal, it should obviously be illegal to target adults too?
That’s the thing that is so ridiculous. Why is it magically OK to drive harmful compulsive behavior on your 18th birthday?
The interns at work talk incessantly about gambling. It’s just weird and wrong.
The answer to that is 'it depends'. We know certain substances like alcohol and tobacco can be/are addictive and yet the product still sells with warnings.
In the case of alcohol and tobacco, I do not think these products are engineered for addiction; they are engineered for taste, and addiction is a side effect. If they could be produced to have the same short-term (< a few hours) effects while avoiding the long-term effects (addiction), the companies would (hopefully) choose to do that. I do not imagine these companies build these products in a way that maximises their addictiveness.
Social media platforms, however, ...
>In the case of alcohol and tobacco, I do not think these products are engineered for addiction,
Cigarettes were 100% engineered for addiction.
>I do not imagine these companies build these products in a way that maximises their addictiveness.
Holy hell, you need to go watch the big tobacco trials of the past to see that the CEOs knew exactly what they were doing, and then how much they spent outright buying scientists to spout bullshit so they could drag the matter out for years.
Tobacco set the stage for companies to use doubt of every kind to hide their intentional actions.
It’s a good point, but crimes against children are a higher priority because children are more vulnerable and the law exists in part to protect our most vulnerable.
Is this a replay of the comics and video games are doing irreparable harm arguments from not too long ago?
I find myself in the uncomfortable position of sympathizing with both sides of the argument - a yes-but-no position.
There is no law that dictates these two things:
1) You can't stalk someone deliberately and persistently, using any means, or medium; even if you're a company, and even if you have good intentions.
2) You can't intentionally influence any number of people toward believing something you know is false and against their interest.
These things need to be felony-level or higher crimes, where executives of companies must be prosecuted.
Not only that, certain crimes like these should be allowed to be prosecuted by citizens directly. Especially where bribery and threats by powerful individuals and organizations might compromise the interests of justice.
The outcome of this trial won't amount to anything other than fines. The problem is, that approach doesn't work. They'll just find different ways to skirt the law. Criminal consequence is the only real way to insist on justice.
The problem with 2 is that you then need someone to be the arbiter of truth, and the truth is often a hard thing to find. This would end up letting governments jail people they disagree with. How would you write the law to prevent that?
Listen, it sounds a lot less evil when you label it "audience engagement" mmk?
People aren't customers anymore. They are the resource to be mined. Advertisers are the customers.
All of these things they're saying are unethical, but not illegal, right?
No TikTok?
Is it really landmark, though? First it was nicotine and big tobacco, then the same addiction engineers designed ultra-processed foods. Kraft, Nabisco, etc. were all spun off by tobacco companies... normalizing food addiction in children. And now it's screens and social media. It's the same fundamental physiology, but it seems society can't learn the lesson either.
I thought it was kind of pathetic how quickly they shoved iPads into schools with no real long-term data, no research whatsoever. Just insane, really. And now here we are yet again.
I am generally displeased with the way social media has evolved, but I'm not in favor of this lawsuit. It seems like a way to blame tech companies for Congress' failure to regulate businesses properly. None of the engineers involved thought of their work as a way to rot the minds of future generations. Their thought process was straightforward:
1. We sell ads to make money.
2. If we keep eyeballs on our apps more than competing apps, we can increase the price for our ads and make more money.
3. Should we implement limits to kick kids off the app after they've been doomscrolling for an hour? Absolutely not, that would violate our duty to our shareholders. If parents complain, we'll say they should use the parental controls already present on their phones and routers. We can't make choices that limit our income when parents don't use the tools they already have.
I'm sorry that social media has ruined so many kids' lives, but I don't think the responsibility lies with the tech companies in this case. It lies with the society that has stood by idly while kids endured cyber-bullying and committed suicide. This isn't something that happened recently- the USA has had plenty of time to respond as a society and chosen not to. Want to sue someone? Sue Congress.
Google and Meta are rational actors in a broken system. If you want to change something, you should change the rules they operate under and hold them accountable to those rules going forward. Australia (and Spain) is doing something about it: now that social media is banned for kids under 16 in those countries, if social media companies try anything sneaky to get around the ban, you actually have a much stronger case.
Now if there were evidence that they were intentionally trying to get kids bullied and have them commit suicide then by all means, fine them into oblivion. But I doubt there is such evidence.
That seems like a really bad excuse to void responsibility. Consider a cigarette maker:
1. We sell cigarettes to make money
2. The more people crave cigarettes, the more money we can make
3. Should we make cigarettes less appealing to children? Absolutely not, we would make less money. Parents should just stop their kids from buying cigarettes.
Also, people in the last few decades have been using “duty to shareholders” as a way to excuse bad behavior, as if it were a moral imperative higher than all others. I don’t really see why it would be.
Curious how this is gonna turn out, but I'm not holding my breath.
I'd argue that we basically incentivise companies to cause harm whenever it is unregulated and profitable because the profits are never sufficiently seized and any prosecution is a token effort at best.
See leaded gas, nicotine, gambling, etc. for prominent examples.
I personally think prosecution should be much harsher in an ideal world; if a company knows that its products are harmful, it should ideally be concerned with minimising that harm instead of fearing to miss out on profits without any legal worries.
"But if we don't engineer addiction, China will beat us to it! It's a national interest!"
Just ban personalized advertising and be done with it, they will do any amount of harm so long as it gets a click.
They will just have personalized feeds for engagement and non-personalized ads, and still get the addictive effect.
It’s a matter of time, maybe decades, until we treat this "engineered addiction" the way the tobacco trials treated cigarettes.
That's optimistic! I worry the attack on discourse and community has fundamentally undermined democracy.
I mean everyone knows this right? There are even leaked memos. They are public companies who need to grow revenue and they gain that revenue mostly through ads and attention.
Related:
Unsealed court documents show teen addiction was big tech's "top priority"
Anything where you scroll through posts or endlessly watch short videos is highly addictive.
If you think it's not, and is just "similar to addiction," try blocking these sites in your browser/phone and see how long you last before feeling negative effects.
I really have never understood this. Short form videos are no more or less interesting to me than any other. I also do not find the YouTube main page very compelling, and rarely follow the things that are suggested at the ends of videos. I certainly do not feel ill if I forget to use YouTube for a week.
No shit
This is why we have juries.
What's amazing here is the Google and Facebook lawyers think they have a chance to persuade members of the public otherwise.
The jury is using words outside of their medical context in situations that do not justify the term. In fact, most of society seems okay with this gross misuse of the term, applying it to things that don't actually manipulate incentive salience. We're going to end up with authoritarians in control of all 'screens' just because our schools have done such a bad job of explaining neuroscience. If you think handing the federal government control of all screens is a good idea in the USA, you really need to look around.
I am not saying that Facebook didn't try. I am just saying that only having access to screens, they would inevitably fail. Screens are very unlike addictive drugs and cannot directly alter neurochemistry (at least not any more than a sunset or any perception does). I strongly dislike the company and have personally never created a Facebook account nor used the website.
>The jury is using words
How do you know what the jury are saying?
>outside of their medical context
That's because medicine doesn’t own the language. People do. If the jury hear words used wrongly then, as speakers of the language, they can interpret it as they wish. They are about to hear from another attorney who will say the opposite to the first one, and they will decide which was most persuasive.
> The jury is using words
Mostly not. That's the lawyers' job. The jury listens.
> outside of their medical context
Well, sure. It's a legal context now. The defense get to make the medical argument, if they like. I think it's a losing one.
> Screens are very unlike addictive drugs and cannot directly alter neurochemistry
Screens are able to show you things that give you small, short dopamine hits, just enough to keep you coming back for more. This is exactly how addictive drugs, gambling, etc. all work.
Or TV, which was fine for years.
TV is not 100% curated for your personal tastes based on your previous interactions.
It was fine for years because it was a generic service in which everybody was forced to view the same content in the same way.
They are very very different things.
[dead]
[flagged]
The classic defense used by cigarette companies historically. We are just rolling up some naturally growing leaves, leaves can't be addictive. Users are responsible for diseases and addiction. We don't target minors.
> People are responsible for their choices
Yes, but also often are not actively in control of them.
People are also in control of whether or not they gamble their last dollar instead of buying some food, or whether or not they put that needle into their veins.
History has shown human nature can be dangerously out of control and people will happily destroy their own lives given the opportunity.
Therefore society tries to protect those vulnerable people with legislation.
So when does it stop? Would you support the state intervening in other situations in which people make choices of which you disapprove? Which choices would you, in your magnanimity, allow them to make?
>Which choices would you, in your magnanimity, allow them to make?
That's why we have voting and a legal system that builds on previous laws. You and I do not need to decide where the line in the sand is drawn; the majority of the population does that via the political and legal framework society has created.
> Would you support the state intervening in other situations in which people make choices of which you disapprove?
Do you think "murder should be legal" is a popular position?
> People are responsible for their choices,
It clearly does not work. Nearly half the country lacks the self-stewardship not to eat themselves into obesity. And I have to pay the externalities for you fat fucks.
"It doesn't work", therefore, tyranny? Because the most important thing in the world is the state ensuring people not make bad choices? So any time the system tries something to stop them and it "doesn't work", that's a justification for increasing state power until it works?
We're talking about words and images. Addiction cannot enter the body through the eyes and ears.
Enjoy your doubleplus goodthink.
> Because the most important thing in the world is the state ensuring people not make bad choices
So you think it's fine to cheat or con somebody? Just a bad choice by the person conned?
The key thing you are missing here is that companies like Google and Facebook aren't some passive actor just 'placing choices' before people.
The allegation is that they were actively working to knowingly do harm for their own interests.
You pretend to be advocating for a world of individual responsibility, yet somehow that doesn't extend to the actions of large companies or the people who run them...
>Addiction cannot enter the body through the eyes and ears.
Please tell me what planet you're from, 'cause it ain't earth.
Addiction is a reinforcement-learning pathology driven by dopaminergic reward circuits. Sensory input alone can trigger it.
Please, for god's sake, learn how addiction actually works.
> It's not possible to be "addicted" to words…
Never met a narcissist?
The state of being a narcissist, in itself, is not something that creates civil or criminal exposure. Nor is being a braggart.
Do you think it’s better to treat a symptom of a disease or treat its root cause?
You've moved the goalpost so far the sport changed.