Honestly I have nothing to add
this post is no man’s land
First of all, intellectual property rights do not protect the author. I’m the author of a few papers and a book, and I do not have intellectual property rights on any of them; like most authors, I had to sign them over to the publishing house.
Secondly, your personal carbon footprint is bullshit.
Thirdly, everyone in the picture is an asshole.
I would not want to get close to a bike repaired by someone who is using AI to do it. Like what the fuck xd I am not surprised he is unable to make code work then xddd
The only real exception I can think of would be to train an AI ENTIRELY on your own personally created material. No sources from other people AT ALL. Used purely for personal use, not used by or available to the public.
I think the public domain would be fair game as well, and the fact that AI companies don’t limit themselves to those works really gives away the game. An LLM that can write in the style of Shakespeare or Dickens is impressive, but people will pay for an LLM that will write their White Lotus fan fiction for them.
This is a very IP-brained take. This is not the reason that AI is harmful.
Possibly, but the intention behind it is more about not exploiting other people. If it’s only trained on my work, and only used by me, I’m the only one harmed by it, and that’s my choice to make.
That’s very deontological. Suppose you train a model that is just as good as other models, but only using your own work. (If you were a billionaire, you could commission enough works to achieve this, perhaps.) Either way, you end up with an AI that lets you produce content without hiring artists. If the end result is just as bad for artists, why is using one any more ethical than the other?
True, but that’s why I specified that it could only be used for my own personal use. Once you start publishing the output you’ve entered unethical territory.
I don’t see the relevance of its personal use here. If it is ethical to use your own AI for personal use, why is it unethical to use an AI trained on stolen data for personal use?
That’s what my workplace has been doing since 1985!
So I’ll be honest. I use GPT to write Python scripts for my research. I’m not a coder and I don’t want to be one, but I do need to model data sometimes and I find it incredibly useful that I can tell it something in English and it can write modeling scripts in Python. It’s also a great way to learn some coding basics. So please tell me why this is bad and what I should do instead.
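For a concrete idea of what I mean, here’s the kind of script it typically writes for me (a minimal sketch with made-up data and a hypothetical exponential-decay model, not my actual research code):

```python
# Hypothetical example of the kind of request I make:
# "fit an exponential decay to my measurements and plot the result"
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def decay(t, a, k, c):
    """Exponential decay model: a * exp(-k * t) + c."""
    return a * np.exp(-k * t) + c

# Stand-in data; in practice this would be loaded from a CSV of measurements.
t = np.linspace(0, 10, 50)
y = decay(t, 2.5, 0.8, 0.3) + np.random.normal(0, 0.05, t.size)

# Least-squares fit with a rough initial guess for the parameters.
params, _ = curve_fit(decay, t, y, p0=(1.0, 1.0, 0.0))
print("fitted a, k, c:", params)

plt.scatter(t, y, label="data")
plt.plot(t, decay(t, *params), color="red", label="fit")
plt.legend()
plt.show()
```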
I think sometimes it is good to replace words to reevaluate a situation.
Would “I don’t want to be one” be a good argument for using AI image generation?
Didn’t you read the post? You’re bad and should feel bad.
I’d say the main ethical concern at this time, regardless of harmless use cases, is the abysmal environmental impact necessary to power centralized, commercial AI models. Refer to situations like the one in Texas. A person’s use of models like ChatGPT, however small, contributes to the demand for this architecture that requires incomprehensible amounts of water, while much of the world does not have enough. In classic fashion, the U.S. government is years behind on accepting what’s wrong, allowing these companies to ruin communities behind a veil of hyped-up marketing about “innovation” and beating China at another dick-measuring contest.
The other concern is that ChatGPT’s ability to write your Python code for data modeling is built on the hard work of programmers who will not see a cent for their contribution to the model’s training. As the adage goes, “AI allows wealth to access talent, while preventing talent from accessing wealth.” But since a ridiculous amount of data goes into these models, it’s an amorphous ethical issue that’s understandably difficult for us to contend with, because our brains struggle to comprehend so many levels of abstraction. How harmed is each individual programmer or artist? That approach ends up being meaningless, so you have to regard it more as a class-action lawsuit, where tens of thousands have been deprived as a whole.
By my measure, this AI bubble will collapse like a dying star in the next year, because the companies have no path to profitability. I hope that shifts AI development away from these environmentally-destructive practices, and eventually we’ll see legislation requiring model training to be ethically sourced (Adobe is already getting ahead of the curve on this).
As for what you can do instead, people have been running local Deepseek R1 models since earlier this year, so you could follow a guide to set one up.
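If you do go local, querying the model is simpler than it sounds. Here’s a minimal sketch assuming you’ve installed Ollama and pulled a deepseek-r1 model (the prompt and model tag are just examples):

```python
# Minimal sketch: ask a locally running Ollama server for a completion.
# Assumes `ollama pull deepseek-r1` has been run and the server is
# listening on its default port, 11434.
import json
import urllib.request

payload = {
    "model": "deepseek-r1",
    "prompt": "Write a Python function that computes a moving average.",
    "stream": False,  # return one complete response instead of chunks
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Everything stays on your own machine, so none of your prompts or data leave your computer.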
I sure am glad that we learned our lesson from the marketing campaigns in the 90’s that pushed consumers to recycle their plastic single-use products to deflect attention away from the harm caused by their ubiquitous use in manufacturing.
Fuck those AI users for screwing over small creators and burning down the planet though. I see no problem with this framing.
I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.
This work has already saved thousands of people’s lives.
But good to know you anti-AI people have your one-dimensional, zero-nuance take on the subject and are now doing moral purity tests on it and dick measuring to see who has the loudest, most extreme hatred for AI.
Nobody has a problem with this, it’s generative AI that’s demonic
Generative AI uses the same technology. It learns when trained on a large data set.
It’s almost like it isn’t the “training on a large data set” part that people hate about generative AI
ICBMs and rocket ships both burn fuel to send a payload to a destination. Why does NASA get to send tons of satellites to space, but I’m the asshole when I nuke Europe??? They both utilize the same technology!
So would you disagree with the OP about there being no exceptions?
Nope, all generative AI is bad, no exceptions. Something that uses the same kind of technology but doesn’t try to imitate a human with artistic or linguistic output isn’t the kind of AI we’re talking about.
Generative AI is a meaningless buzzword for the same underlying technology, as I kinda ranted on below.
Corporate enshittification is what’s demonic. When you say fuck AI, you should really mean “fuck Sam Altman”
I mean, not really? Maybe they’re both deep learning neural architectures, but one has been trained on an entire internetful of stolen creative content and the other has been trained on ethically sourced medical data. That’s a pretty significant difference.
No, really. Deep learning, transformers, etc. were discoveries that allowed for all of the above; just because corporate VC shitheads drag their musty balls through the latest boom, abusing the piss out of it and making it uncool, does not mean the technology is a useless scam.
Yeah, that’s not what I was disagreeing with. You’re right about that; I’m on record saying that capitalism is our first superintelligence and it’s already misaligned. I’m just saying that it isn’t really meaningless to object to generative AI. Sure the edges of the category are blurry, but all the LLMs and diffusion-based image generators and video generators were unethically trained on massive bodies of stolen data. Seriously, talking about AI as though the architecture is the only significant element when getting good training data is like 90% of the challenge is kind of a pet peeve of mine. And seen in that light there’s a pretty significant distinction between the AI people are objecting to and the AI people aren’t objecting to, and I don’t think it’s a matter of “a meaningless buzzword.”
I totally understand that pet peeve of yours; I just disagree with it on a fundamental level. The data is the content, and speaking about it as if the data were the technology itself is like talking about clothes in general as being useful or not. It’s meaningless, especially if you don’t know about or acknowledge the different types of apparel and their uses. It’s obviously not general knowledge, but it would be like bickering about whether underwear is a great idea or not; it’s totally up to the individual whether they want to wear it, even if being butt naked in public is illegal. If the framework is irrelevant, then the immediate problem isn’t generative AI, especially not the perfectly ethical open-source models.
This.
I recently attended a congress about technology applied to healthcare.
There were works that improved diagnosis and interventions with AI; generative AI was mainly used to produce synthetic training data.
However, there were also other works that left a bad aftertaste in my mouth, like replacing the human interaction between patient and specialist with a chatbot in charge of explaining the procedure and answering the patient’s questions. Some saw privacy laws as a hindrance and wanted to use any kind of private data.
Both are GenAI: one improves lives, the other improves profits.
I think DLSS/FSR/XeSS is a good example of something that is clearly ethical and also clearly generative AI. Can’t really think of many others lol
Generative AI is a meaningless buzzword for the same underlying technology
What? An AI that can “detect respiratory illnesses in X-rays and MRI scans” is not generative. It does not generate anything. It’s a discriminative AI. Sure, the theories behind these technologies have many things in common, but I wouldn’t call them “the same underlying technology”.
It is literally the exact same technology. If I wanted to I could turn our X-ray product into an image generator in less than a day.
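To illustrate the point (a toy PyTorch sketch with made-up shapes and layers, not our actual product code): the same convolutional backbone that feeds a classification head can just as easily feed a decoder, at which point the network outputs images instead of labels.

```python
# Toy sketch: one shared backbone, a discriminative head and a generative head.
import torch
import torch.nn as nn

backbone = nn.Sequential(  # shared convolutional feature extractor
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
)

# Discriminative head: features -> class scores ("disease or not").
classifier = nn.Linear(32 * 16 * 16, 2)

# Generative head: features -> pixels, via a tiny decoder.
decoder = nn.Sequential(
    nn.Linear(32 * 16 * 16, 64 * 64), nn.Sigmoid(),
    nn.Unflatten(1, (1, 64, 64)),
)

x = torch.randn(1, 1, 64, 64)      # stand-in for a 64x64 grayscale scan
features = backbone(x)
print(classifier(features).shape)  # torch.Size([1, 2])
print(decoder(features).shape)     # torch.Size([1, 1, 64, 64])
```

The training objectives differ, but the underlying machinery is shared.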
Because they are both computers and you can install different (GPU-bound) software on them?
It’s true that generative AI uses discriminative models behind the scenes, but the layer needed on top of that is enough to classify it as a different technology.
No, I mean fuck AI. You can be included in that, if you insist.
Except clearly some people do. This post is very specifically saying ALL AI is bad and there are no exceptions.
Generative AI isn’t a well-defined concept, and a lot of the tech we use is indistinguishable on a technical level from “Generative AI”.
sephirAmy explicitly said generative AI
Give me an example, and watch me distinguish it from the kind of generative AI sephirAmy is talking about
Again: generative AI is a meaningless term.
All this is being stoked by OpenAI, Anthropic and such.
They want the issue to be polarized and remove any nuance, so it’s simple: use their corporate APIs, or not. Anything else is “dangerous.”
Because what they’re really scared of is awareness of locally runnable, ethical, and independent task-specific tools like yours. Those don’t make them any money. Stirring up “fuck AI” does, because that’s a battle they know they can win.
And that AI has been trained on data that was stolen, taking away the livelihoods of thousands more. Further, the environmental destruction has the capacity to harm millions more.
I’m not lost on the benefits; it can be used to better society. However, the lack of policy around it, especially the pandering to corporations by the American judicial system, is the crux here. For me, at least.
No. I’m also part of the ethics committee at my work, and since we work with people’s medical data as our training sets, 9/10ths of our time is spent making sure that data is collected ethically and with very specific consent.
I’m fine with that. My issue is primarily theft and permissions and the way your committee is running it should be the absolute baseline of how models gather data. Keep up the great work. I hope that this practice becomes mainstream.
nobody is trashing Visual Machine Learning to assist in medical diagnostics
cool strawman though, i like his little hat
No, when you literally say “Fuck AI, no exceptions” you are very, very explicitly covering all AI in that statement.
what do you think visual machine learning applied to medical diagnostics is exactly
does it count as “ai” if i could teach an 11th grader how to build it, because it’s essentially statistically filtering legos
don’t lose the thread sportschampion
Well, most of my colleagues have PhDs or MDs, so good luck teaching an 11th grader to do it.
They’re not even people. Who knows if that story was true. They’re not conscious anymore.
Those are not GPTs or LLMs. Fuck off with your bullshit trying to conflate the two.
We actually do use Generative Pre-trained Transformers as the base for a lot of our tech. So yes they are GPTs.
And even if they weren’t GPTs, this is a post saying all AI is bad and that there are literally no exceptions to that.
Again with the conflation. They clearly mean GPTs and LLMs from the context they provide, they just don’t have another name for it, mostly because people like you like to pretend that AI is shit like chatGPT when it benefits you, and regular machine learning is AI when it benefits you.
And no, GPTs are not needed, nor used, as a base for most of the useful tech, because anyone with any sense in this industry knows that good models and carefully curated training data gets you more accurate, reliable results than large amounts of shit data.
Our whole tech stack is built off of GPTs. They are just a tool: use it badly and you get AI slop, use it well and you can save people’s lives.
As I said, anyone with sense.
Frfr
I believe AI is going to be a net negative to society for the foreseeable future. AI art is a blight on artistry as a concept, and LLMs are shunting us further into a search-engine-overfit, post-truth world.
But also:
Reading the OOP has made me a little angry. You can see the echo chamber forming right before your eyes. Either you see things the way OOP does with no nuance, or you stop following them and are left following AI hype-bros who’ll accept you instead. It’s disgustingly twitter-brained. It’s a bullshit purity test that only serves your comfort over actually trying to convince anyone of anything.
Consider someone who has had some small but valued usage of AI (as a reverse dictionary, for example), but generally considers things like energy usage and intellectual property rights to be serious issues we have to face for AI to truly be a net good. What does that person hear when they read this post? “That time you used ChatGPT to recall the word ‘verisimilar’ makes you an evil person.” is what they hear. And at that moment you’ve cut that person off from ever actually considering your opinion ever again. Even if you’re right that’s not healthy.
I’m what most people would consider an AI Luddite/hater, and I think OOP communicates like a dogmatic asshole.
You can also be right for the wrong reasons. You see that a lot in the anti-AI echo chambers, people who never gave a shit about IP law suddenly pretending that they care about copyright, the whole water use thing which is closer to myth than fact, or discussions on energy usage in general.
Everyone can pick up on the vibes being off with the mainstream discourse around AI, but many can’t properly articulate why and they solve that cognitive dissonance with made-up or comforting bullshit.
This makes me quite uncomfortable because that’s the exact same pattern of behavior we see from reactionaries, except that what weirds them out for reasons they can’t or won’t say explicitly isn’t tech bros but immigrants and queer people.
Out of curiosity, could you link a source vis-a-vis AI’s water consumption?
It’s not that the datacenters don’t “use” water (you’ll find plenty of sources confirming that), but rather that the argument stretches the concept of “water usage” well beyond the limit of meaninglessness. Water is not electricity, it can’t usually be transported very far and the impact of a pumping operation is fundamentally location-dependent. Saying “X million litres of water used for Y” is usually not useful unless you’re defining the local geographic context.
Pumping aquifers in a dry area and discharging the water in a field: very bad.
Pumping from and subsequently releasing water to a lake/river: mostly harmless, though sometimes in summer the additional heat pumped into the water can be harmful depending on the size of the body of water.
The real problem is that lots of areas (especially in the US) haven’t updated their water rights laws since the discovery of water tables. This is hardly a new problem, and big ag remains by far the worst offender here.
Then there’s the raw materials in the supply chain… and like not to downplay it but water use is not exactly at the top of the list of environmental impacts there. Concrete is hella bad on CO2 emissions, electronics use tons of precious metals that often get strip mined and processed with little to no environmental regulation, etc.
Frankly, putting “datacenter pumped water out of the river then back in” in the same aggregate figure as “local lake polluted for 300 years in China by industrial byproducts” rubs me the wrong way. These are entirely different problems, and nobody benefits from their being lumped together like this. It feels the same way to me as saying “but there are children starving in Africa!” when someone throws away some food – sure, throwing away food isn’t great, and it’s technically on-topic, but we can see how bundling these things together isn’t useful, right?
The people who hate immigrants and queer people are AI’s biggest defenders. It’s really no wonder that people who hate life also love the machine that replaces it.
A perfect example of the just completely delusional factoids and statistics that will spontaneously form in the hater’s mind. Thank you for the demonstration.
(as a reverse dictionary, for example)
Thanks for putting a name on that! That’s actually one of the few useful purposes I’ve found for LLMs. Sometimes you know or deduce that some thing, device, or technique must exist. The knowledge of it is out there, but you simply don’t know the term to search for. IMO, this is one of the killer features of LLMs, and it works well because whatever the LLM outputs is simply and instantly verifiable: you describe the characteristics of something to the LLM and ask it what thing has those characteristics, and once you have a possible name, you look it up in a reliable source and confirm it. Sometimes the biggest hurdle to figuring something out is just learning the name of a thing.
Using ChatGPT to recall the word ‘verisimilar’ is an absurd waste of time and energy, and in no way justifies the use of AI.
90% of LLM/GPT use is a waste or could be done better with another tool, including non-LLM AIs. The remaining 10% is just outright evil.
Where is your source? It sounds unbelievable
Source is the commercial and academic uses I’ve personally seen as an academic-adjacent professional that’s had to deal with this sort of stuff at my job.
What was the data you saw on the volume of requests to non-LLM models as they relate to utility? I can’t figure out what profession has access to this kind of statistic. It would be very useful to know, thx.
I think you’ve misunderstood what I was saying: I don’t have spreadsheets of statistics on requests for LLM AIs vs non-LLM AIs. What I have is exposure to a significant number of AI users, each running different kinds of AIs, and me seeing what kind of AI they’re using, for what purposes, and how well it works or doesn’t.
Generally, LLM-based stuff is really only returning ‘useful’ results for language-based statistical analysis, which NLP handles better, faster, and vastly cheaper. For the rest, they really don’t even seem to be returning useful results- I typically see a LOT of frustration.
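For example, here’s roughly what I mean by the classic NLP tooling that outperforms an LLM on this kind of task (a minimal scikit-learn sketch with made-up toy data, not anything from my workplace):

```python
# Minimal sketch of "classic NLP": a TF-IDF + logistic regression text
# classifier, the kind of thing that trains and runs in milliseconds on a CPU.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up toy data standing in for a real labeled corpus.
texts = [
    "great product, works well",
    "terrible, broke immediately",
    "love it, highly recommend",
    "awful quality, waste of money",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["this thing is fantastic"]))  # e.g. ['positive']
```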
I’m not about to give any information that could doxx myself, but the reason I see so much of this is because I’m professionally adjacent to some supercomputers. As you can imagine, those tend to be useful for AI research :P
I feel this way about people who eat meat.
I’m just sick of all this because we gave “AI” too much meaning.
I don’t like generative AI tools like LLMs, image generators, voice, video, etc., because I see no interest in them; I think they give people bad habits, and they are not well understood by their users.
Yesterday, again, I had to correct my mother because she told me some fun fact she had learned from ChatGPT (that was wrong), and she refused to listen to me because “ChatGPT does plenty of research on the net, so it should know better than you.”
As for the claim that “it will replace artists and destroy the art industry,” I don’t believe it (even if I made the choice to never use it), because it will forever be a tool. It’s practical if you want a cartoony monkey image for your article (you meanie stupid journalist), but you can’t say “make me a piece of art” and then put the result in a museum.
Making art myself, I hate gen-AI slop from the depths of my heart, but I’m obligated to admit that much. (Let’s not forget how it trains on copyrighted media, uses a shitton of energy, and gives no credit.)
AI in other fields, like medicine, automatic subtitles, and engineering, is fine by me. It won’t give people bad habits, it is well understood by its users, and it is truly beneficial, whether by being more efficient than humans at saving lives or simply by being helpful to disabled people.
TL;DR: AI in general is a tool. Gen AI is bad as a powerful tool for everyone’s use, just as it would be bad to give everyone a helicopter (even if it improves mobility). AI is nonetheless a very nice tool that can save lives and help disabled people IF used and understood correctly and fairly.
AI in other fields, like medicine, automatic subtitles, and engineering, is fine by me. It won’t give people bad habits, it is well understood by its users, and it is truly beneficial, whether by being more efficient than humans at saving lives or simply by being helpful to disabled people.
I think the generative AI tech bros have deliberately contributed to a lot of confusion by calling all machine learning algorithms “AI”.
I mean, you have some software which both works and is socially beneficial, like translation and speech recognition software.
You have some software that works, and is incredibly dangerous because it works, like facial recognition and all the horrible ways authoritarian governments can exploit it.
And then you have some software that “works” to produce socially detrimental bullshit, like generative AI.
All three of these categories use machine learning algorithms, trained on data sets to recognize and produce patterns. But they aren’t the same in any other meaningful sense. Calling them all “AI” does nothing but confuse the issue.
I spent an hour taking photographs on the drive home the other night (the wife was driving and a storm gave us great clouds). I was mostly playing with angles and landscape, but it was fun. The kind of stuff it would take entire weeks to do thirty years ago, and I was done in an hour. I got a mediocre shot at best, but it was real, dammit.
My issues with gen AI are fundamentally twofold:
1. Who owns and controls it (billionaires and entrenched corporations)
2. How it is shoehorned into everything (decision-making processes, human-to-human communication, my coffee machine)
I cannot wait until the check is finally due and the AI bubble pops, folding these digital snake oil sellers’ house of cards.
When generative AI was first taking off, I saw it as something that could empower regular people to do things that they otherwise could not afford to. The problem, as is always the case, is that capitalism immediately turned it into a tool of theft and abuse. The theft of training data, the power requirements, selling it for profit, competing against those whose creations were used for training without permission or attribution, the unreliability and untrustworthiness: so many ethical and technical problems.
I still don’t have a problem with using the corpus of all human knowledge for machine learning, in theory, but we’ve ended up heading in a horrible, dystopian direction that will have no good outcomes. As we hurtle toward corporate controlled AGI with no ethical or regulatory guardrails, we are racing toward a scenario where we will be slavers or extinct, and possibly both.
When generative AI was first taking off, I saw it as something that could empower regular people to do things that they otherwise could not afford to.
Except, of course, you aren’t doing anything. You are no more writing, making music, or producing art than an art director at an ad agency is. You’re telling something else to make (really shitty) art on your behalf.
yes, it’s just as bad as being a director
You really take no issue with how they were all trained?
Not OP but still gonna reply. Not really? The notion that someone can own (and be entitled to control) a portion of culture is absurd. It’s very frustrating to see so many people take issue with AI as “theft,” as if intellectual property were something we should support and defend instead of being the actual tool for stealing artists’ work (“property is theft” and all that). And obviously data centers are not built to be environmentally sustainable (not an expert, but I assume this could be done if they cared to do so). That said, using AI to do art so humans can work is the absolute peak of stupid fucking ideas.
eh, i’ll reply too. the only reason why intellectual property exists for art is because it’s essentially the only way for artists to make money under this capitalist system. while i agree that a capitalist economic system is bad and that artists should be able to make a livable wage, intellectual property on art is more of a symptom of this larger problem
I just don’t think that intellectual property really achieves that. It seems to me that it is a much better tool for corporate control of art and culture than for protecting small artists. Someone who is trying to pay bills with their art probably can’t afford lawyers to protect that work. That said, I don’t necessarily have a better solution other than just asking people to support artists directly instead of going through corporate middlemen.
yeah i definitely agree, it’s not the best solution, and the law is insanely biased towards the rich. hopefully one day artists will be guaranteed a livable wage for their art
Solving points 1 and 2 will also address many ethical problems people create with AI.
I believe that information should be accessible to all. My issue is not with them training the way they did, but with their monopoly on the process. (In the very same vein as Sci-Hub makes paywalled whitepapers accessible, cutting out the profiteering publishers.)
It must be democratized and distributed, not centralized and monetized!
The way they were trained is the way they were trained.
I don’t mean to say that the ethics don’t matter, but you are talking as though this isn’t already present tense.
The only way to go back is basically a global EMP.
What do you actually propose that is a realistic response?
This is an actual question. To this point, the only advice I’ve seen come from the anti-AI crowd is “don’t use it, it’s bad!” And that is simply not practical.
You all sound like the people who think we are actually able to get rid of guns entirely.
I’m not sure your “this is the present” argument holds much water with me. If someone stole my work and made billions off it, I’d want justice whether it was one day or one decade later.
I also don’t think “this is the way it is, suck it up” is a good argument in general. Nothing would ever improve if everyone thought like that.
Also, not practical? I don’t use genAI and I’m getting along just fine.
Okay, you know those gigantic data centers that are being built that are using all our water and electricity?
Stop building them.
Seems easy.
Just like how not buying guns is easy. For the people who get it.
Guns can be concealed and smuggled.
Compute warehouses the size of football fields that consume huge amounts of electricity and water absolutely can’t. They can all be found extremely easily and shut down, and it would be extremely easy to prevent more from being built.
This isn’t hard.
It’s a weird argument to say “we could just stop doing popular things.” It shows a lack of awareness. And no, explaining this doesn’t mean I’m taking sides; I just recognize the current reality.
The right thing isn’t always popular. Something being popular is not itself a good argument for a thing to be done.
It’s not “popular” organically, it’s being forced on us by people who are invested in the technology. The chatbots are being shoved into everything because they want to make them profitable despite being money holes, not because people want it.
Yeah, they shouldn’t be popular. Tell all your friends.
To this point, the only advice I’ve seen come from the anti-AI crowd is “don’t use it, it’s bad!” And that is simply not practical.
I’d argue it’s not practical to use it.
Your argument is invalid; the capitalists are making money. It will continue for as long as there is money to be made. Neither your agreement nor mine is necessary.
How do we fix the problem that makes AI something we have to deal with?
Sabotage, public outrage, I dunno.
If you’re arguing that people shouldn’t be upset because there’s no escaping it, this is an argument in favor of capitalism. Capitalism can’t be escaped either.
I appreciate you taking my question at face value; you’re the only one who did. Your capitalism quote worked perfectly. I was trying to use guns as my example of shit I can’t get away from.
I guess. I’m still anti-capitalism, though, but in the plant a fig tree for your grandchildren kind of sense.
… the capitalists are making money. It will continue for as long as there is money to be made.
Nah these companies don’t even make money on the whole, they burn money. So your argument is invalid, and may God have mercy on your soul! 🙏
Deranged
AI is a marketing term. Big Tech stole ALL data. All of it. The brazen piracy is a sign they feel untouchable. We should touch them.
I have the same rant about “this is the only funny AI meme” shit.
Are people expected not to follow anyone they disagree with?
Reading other opinions? On my echo chamber platform of choice?! /s
Follow to expose yourself to different perspectives? Sure.
But it sounds like the users in question are following with the intent to reply “you’re wrong” to everything the OP puts out.
Which… I do, sadly, expect. But I wouldn’t wish for it.
Well deserved. The OOP is wrong, and it sounds like they know it and are just trolling.
Why would you follow someone you disagree with?
Edit: I’m convinced, guys. I should follow racist, Nazi, psychopaths because even if I disagree their words hold value.
I’m not saying that we should rage-follow, but it’s also unreasonable to believe it’s possible to agree with every single opinion of another person, let alone of an entire community.
AI is whatever, but man, has social media been mind poison.
I say we burn it all down, honestly. Including this place.
I tend to agree. Mass social media was a mistake. I had way better conversations and learned way more shit from random people when I was posting on a niche metal band’s fan-run message board back in the 00’s. Now it’s all just who can post the fastest bullshit to get the most views and clicks.
Talk about AI dumbing people down, but at least it has the ability to teach you what you want to know, if you tell it to. Social media, especially with the TikTok style of content being pushed everywhere else, is just 90% pure brain rot.
Get rid of votes and worthless Internet Points and a lot of that would vanish. Of all the things to copy from Reddit and Twitter and their ilk, voting was the dumbest one for Lemmy to pick up.
Yes, that’s well said. I’d also take ai over social media any day.
A while ago someone launched a social media platform where all the people except the user are AI. I thought it was stupid when I heard of it (still do; I wouldn’t use it), but people who have used it have noted how different it was, because the “people” on it were not mainly assholes like on normal social media. The difference shows how toxic social media is.
You follow them because you’re interested in their posts and you generally agree on most things. If I follow someone and they start saying FF14 is a good game, I’m not going to unfollow just because I disagree.
I should follow racist, Nazi, psychopaths
False equivalency and strawman, nice
Occasional disagreement isn’t a bad thing. Provided that the opinions expressed aren’t toxic or dangerous, what’s wrong with hearing an opinion that differs from your own? You don’t have to endorse it, share it, or even comment about it.
No two people are going to agree 100% on everything. Listening to those who disagree with you means having opportunities to learn something new, and to maybe even improve yourself based on new information.
keeps you informed, and it shows open-mindedness
Coming up with a genuinely original idea is a rare skill, much harder than judging ideas is. Somebody who comes up with one good original idea (plus ninety-nine really stupid cringeworthy takes) is a better use of your reading time than somebody who reliably never gets anything too wrong, but never says anything you find new or surprising. Alyssa Vance calls this positive selection – a single good call rules you in – as opposed to negative selection, where a single bad call rules you out. You should practice positive selection for geniuses and other intellectuals.
I think about this every time I hear someone say something like “I lost all respect for Steven Pinker after he said all that stupid stuff about AI”. Your problem was thinking of “respect” as a relevant predicate to apply to Steven Pinker in the first place. Is he your father? Your youth pastor? No? Then why are you worrying about whether or not to “respect” him? Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
Yes. And. The worst-case scenario is: the black box is creating arguments deliberately designed to make you believe false things. 100% of the arguments coming out of it are false, either containing explicit falsehoods or presenting true facts in such a way as to draw a false conclusion. If you, personally, cannot see that one of its arguments is false, it’s because you lack the knowledge or rhetorical skill to see how it is false.
I’m sure you can think of individuals and groups whom this applies to.
(And there’s the opposite issue. An argument that is correct, but that looks incorrect to you, because your understanding of the issue is limited or incorrect already.)
The way to avoid this is to assess the trustworthiness and credibility of the black box - in other words, how much respect to give it - before assessing its arguments. Because if your black box is producing biased and manipulative arguments, assessing those arguments on their own merits, and assuming you’ll be able to spot any factual inaccuracies and illogical arguments, isn’t objectivity. It’s arrogance.
The mistake Mister Alexander makes here is to assume that geniuses exist, or that original ideas are rare. They don’t, and they aren’t. Spend more than 15 minutes with any toddler and you’ll easily reach those 100 new original ideas. Humans are new-idea machines; it’s what we do. It is spontaneous, not extraneous, to us. To assume otherwise is very cynical and disingenuous. Every person has the capability to be a genius, because genius is just a social label granted to extremely narrow interpretations and projections of an individual’s abilities in an extremely concrete set of skills or topics. For example, re-contextualize with a diagnosis of autism and suddenly they are not a genius, they have a hyperfixation.
Also, the premise that every idea, especially a brand-new one, can be judged and ruled good or bad in a vacuum, right out of the gate, is also very stupid. The category of genius is a very recent concoction, stemming from the halls of Victorian moral presumptions and the nobility’s newly developed habit of worshiping the writings, which they didn’t understand, of people they had never met. This is what motivates the popular myth that genius is always positive. But Goebbels was a genius at propaganda; everything we do today in publishing is based on stuff he invented. That doesn’t mean all his ideas were worth listening to, and were he alive and you followed him on Twitter (let’s be honest, he would have a Twitter), that would shed a rather poor light on you.
Because, and this is the important part, humans are not a loose collection of isolated ideas. We are not modular, freely separable, reconfigurable beings. We are holistic, evolving, and integral. Sure, we might be different things to different people (privately) and audiences (publicly) at different points in time, but our own sense of identity and being is not divisible. Steven Pinker is perfectly capable of simultaneously being a liberal, atheist, intelligent linguist; a mediocre psychologist who forgot how history works; and a stupid misogynist and racist. All at the same time, without ever ceasing to be a single integral person. It doesn’t come down to an imaginary ratio of good takes to bad takes. That’s a stupid premise. You don’t keep a broken clock around on the off chance it might be right twice a day. Use a more holistic sense.
Remember, what’s behind the user name is (still more often than not) a full person, not a black box (except if it is a bot, of course).
I understand and see why he didn’t touch the moral aspect of his own argument: any moral analysis completely dismantles his premises. Morality is the most important thing separating humans from animals and machines. Of course, if someone is an evil POS you should block and cancel their ass. It’s Karl Popper all over again: if we don’t rule out bad takes on the off chance there will be a good take, we end up with a Nazi bar.
Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate.
This is a very weird way to look at people.
Anyone can have an original idea, not just “geniuses.” I don’t understand outsourcing your thinking, your creativity, and your right to free association because some guy had a good idea once.
(And I don’t think my dad, the inventor of Toaster Strudel, would approve of this.)
I have simpler policies. If someone I’m listening to is annoying and wrong more often than not, then I stop fucking listening to them.
I’m not sure when people started to think that they had to go about life listening to stupid opinions of annoying fuck wads they disagree with. But you absolutely do not have to live life that way.
pinker is a very bad guy and we should not be lionizing him for any reason
why would you follow someone you agree with?
if you want to learn, you search discord.
if you want to learn, you search discord.
Searching Discord is precisely the opposite of learning. You lose knowledge every second spent on Discord.
/s
:) I can’t know, I’m not a Discord user. Apparently I prefer “losing knowledge” on Lemmy.
They meant
dis·cord: lack of agreement or harmony (as between persons, things, or ideas)
Here, I was keeping it in a drawer because I thought I wouldn’t need it, but obviously I did.
/s
I couldn’t take the statement itself as sarcasm because you’re not wrong lol. It would have been more obvious if you glazed Discord instead I guess.
I thought my use of capitalized Discord would be subtle but noticeable that it was a joke. I guess I was too subtle.
if you want to learn, you search discord.
This is why when learning guitar I looked up guitar lessons and then looked for people who didn’t believe learning to play guitar was possible at all and the abilities instead were based upon innate talent and genetics! /s
Seriously, if learning were done by discord, then US politics (and cable news viewers) would be full of absolute scholars instead of, you know, the exact fucking opposite of that.
guitar example does not work :/
Politicians are not genuine in their discourse. Most are there for profit, and they say things that even they don’t believe in 🤷
Why would listening to two sides of this help you learn anything? Hearing double the lies will teach you nothing.
After your comment, I went back to the top of this post and started reading all the comments. It’s very interesting to read the arguments from many sides and see the nuances some people bring to the conversation.
That isn’t all discord.
Relatedly, if you think social media threads are a great way to learn stuff I don’t know what to tell you other than maybe try picking up a book and see if there’s a difference there.