When I tried it in the past, I kinda didn't take it seriously because everything was confined to its instance, but now there's full-featured global search and proper federation everywhere? Wow, I thought I heard there were some technical obstacles making it very unlikely, but now it's just there and works great! I asked ChatGPT and it says this feature was added 5 years ago! Really? I'm not sure how I didn't notice this sooner. Was it really there for so long? With flairs showing the original instance a video comes from and everything?
Why do people bring this up every fucking time?
Honest answer? It’s easy and it won’t judge you for asking stupid questions.
Edit - people are replying as if I said I do this. I'm sorry for the confusion. I don't. This is just why I think other people do it. When it comes to the general population, most people don't care; they just want easy.
No, it'll just hallucinate shit that'll make you look dumb when you go and state it as fact.
Search engines and Wikipedia don’t judge you for asking stupid questions either.
People also say they googled, unfortunately.
Not the same thing.
Google allows for the possibility that the user was able to think critically about the sources a search returned.
ChatGPT is a drunk uncle confidently stating a thing he heard third-hand from Janet in accounting, and you taking him at his word.
AIs provide you with links so you can use your critical thinking.
People before ChatGPT thought critically about things on Google as much as they do about ChatGPT today.
People before Facebook thought critically about what they saw on the news as much as they do about Facebook today.
Sure, people have never thought about things too much, and sources aren't always perfectly reliable, but some sources are worse than others.
What do you mean? It's like being angry that people bring up "I googled something".
Google: I checked the listing of news sites to find information about a world event directly from professionals who double-check their sources.
ChatGPT: I asked my hairstylist their uninformed opinion on a world event based on overheard conversations.
I mean, a moron could find the wrong information on Google and your hairstylist could get lucky and be right, but odds are one source provides the opportunity for reliable results and the other is random and has a massive shit ton of downsides.
Lots of legitimate concerns and issues with AI, but if you’re going to criticize someone saying they used it you should at least understand how it works so your criticism is applicable.
It is useful. ChatGPT performs web searches, then summarizes the results in a way customized to what you asked it. It skips the step where you have to sift through a bunch of results and determine "is this what I was looking for?" and "how does this apply to my specific context?"
Of course it can and does still get things wrong. It’s crazy to market it as a new electronic god. But it’s not random, and it’s right the majority of the time.
It might be wrong more often than you think
https://futurism.com/study-ai-search-wrong
Besides the other commenter highlighting the specific nature of the linked study, I will say I’m generally doing technical queries where if the answer is wrong, it’s apparent because the AI suggestion doesn’t work. Think “how do I change this setting” or “what’s wrong with the syntax in this line of code”. If I try the AI’s advice and it doesn’t work, then I ask again or try something else.
I would be more concerned about subjects where I don’t have any domain knowledge whatsoever, and not working on a specific application of knowledge, because then it could be a long while before I realize the response was wrong.
Right: it skips the part where human intelligence and critical thinking is applied. Do you not understand how that’s a fucking problem‽
Could you try to understand what I’m saying instead of jumping down my throat?
If I want to turn off a certain type of notification in a program I’m using, I don’t need to sift through three forum threads to learn how to do that. I’m fine taking the AI route and don’t think I’ve lost my humanity.
Googling, at least until fairly recently, meant "I consulted an index of the Internet." It's a means to get to the bit of information.
Asking ChatGPT is like asking a well-behaved parrot in the library and believing every word it says instead of reading the actual book the librarian would point you towards.
I use it instead of search most of the time nowadays. Why? Because it does proceed to google it for me, parse the search results, read the pages behind those links, summarize everything, present it to me in a short, condensed form, and also provide the links where it got the info. This feature has been here for a while.
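In case it helps to picture that search-then-summarize pipeline, here's a minimal sketch of the loop described above. Everything is illustrative: the search step is stubbed with canned results and the "summarizer" just keeps first sentences, standing in for a real search API and a real LLM (no vendor's actual API is shown here).

```python
# Sketch of the "search, read, summarize, cite" loop an AI assistant runs.
# Real assistants call a web search API and an LLM; both are stubbed here.

def search(query):
    # Stand-in for a web search API call (assumed shape: title/url/text).
    return [
        {"title": "Docs: federation", "url": "https://example.com/a",
         "text": "Global search federates results across instances. "
                 "Each result keeps a flair naming its home instance."},
        {"title": "Release notes", "url": "https://example.com/b",
         "text": "Search across federated instances shipped recently. "
                 "Older versions searched only the local instance."},
    ]

def summarize(pages, max_sentences=2):
    # Stand-in for the LLM step: keep the first sentence of each page.
    sentences = []
    for page in pages:
        first = page["text"].split(". ")[0].rstrip(".") + "."
        sentences.append(first)
    return " ".join(sentences[:max_sentences])

def answer(query):
    # Search, condense, and keep the source links alongside the answer.
    results = search(query)
    summary = summarize(results)
    sources = [r["url"] for r in results]
    return {"summary": summary, "sources": sources}

if __name__ == "__main__":
    out = answer("when did federated global search ship?")
    print(out["summary"])
    for url in out["sources"]:
        print("source:", url)
```

The point of the last step is exactly the one made above: the links come back with the summary, so you can click through and check.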
It’s all good, Lemmy users are strongly anti-ai and are genuinely learning right now that chatgpt, mistral, perplexity etc can search the web
We aren't anti-AI. We just ain't lemmings.
I use AI as inspiration. That's all it is: a fancy fucking writing prompt.
You use AI for writing prompts? That's pretty cool. A lot of people use AI for writing prompts, and a lot of writers say it's great for getting past writer's block.
Let’s just keep adding more and more layers like a game of telephone!
What do you mean?
Go ask ChatGPT
I don't use ChatGPT. I use LM Studio, which runs local LLMs (AI you can run locally on your PC; I have solar and a solar battery, so there are no CO2 emissions from my queries). I primarily use it for coding questions and practice, translations from Russian/Ukrainian/French, practising French, etc. Then I use Mistral AI second (French-based), and third Perplexity (American).
I also use Ecosia.org for searches as well
I asked mistralai/mistral-small-3.2 to elaborate on what you said. Is this what you meant?
The phrase “Let’s just keep adding more and more layers like a game of telephone!” is a metaphorical way of expressing skepticism or concern about the accuracy and reliability of information as it gets passed through multiple layers of interpretation, especially when involving AI systems.
Here’s what it likely means in this context:
Game of Telephone Analogy: In the classic “game of telephone” (or “Chinese whispers”), a message is whispered from one person to another in a line, and by the time it reaches the end, the original message is often distorted or completely changed due to mishearing, misinterpretation, or intentional alteration. The user is suggesting that relying on AI systems to search, summarize, or interpret web content might introduce similar layers of potential inaccuracies or biases.
Layers of Interpretation: The “layers” could refer to the steps involved in using an AI system to access and summarize information:
Concerns About Accuracy: The user might be implying that each additional “layer” (especially when involving AI) could introduce errors, biases, or misinterpretations, much like how a message gets distorted in the game of telephone.
Hostility Toward AI: Given the context you provided (Lemmy users being “strongly anti-AI”), this comment likely reflects a broader distrust of AI’s ability to accurately and reliably convey information without introducing new problems.
In essence, the user is cautioning against blindly trusting AI systems to handle information retrieval and summarization, suggesting that doing so could lead to a breakdown in accuracy or meaning, similar to how a message degrades in a game of telephone.
And it still gets shit wrong.
Because they know it's not accurate, and they explicitly mention it so you know where this information comes from.
Then why post it at all?
Because they'd still like to know? It's generally expected that you do some research on your own before asking other people, and inform them of what you've already tried.
Asking ChatGPT isn’t research.
ChatGPT is a moderately useful tertiary source. Quoting Wikipedia isn’t research, but using Wikipedia to find primary sources and reading those is a good faith effort. Likewise, asking ChatGPT in and of itself isn’t research, but it can be a valid research aid if you use it to find relevant primary sources.
At least some editor will usually make sure Wikipedia is correct. There's nobody ensuring ChatGPT is correct.
Just using the "information" it regurgitates isn't very useful, which is why I didn't recommend doing that. Whether the information summarized by Wikipedia or ChatGPT is accurate really isn't important; you use those tools to find primary sources.
I'd argue that it's very important, especially since more and more people are using it. Wikipedia is generally correct, and people, myself included, edit incorrect things. ChatGPT is a black box, and there's no user feedback. It's also stupid to waste resources running an inefficient LLM on something a regular search and a few minutes of time, along with about a bite of an apple's worth of energy, could easily handle. After all that, you're going to need to check all those sources ChatGPT used anyway, so how much time is it really saving you? At least with Wikipedia I know other people have looked at the same things I'm looking at, and a small percentage of those people will actually correct errors.
Many people aren't using it as a valid research aid like you point out; they're just pasting directly out of it onto the internet. This is the use case I dislike the most.
AI seems to think it’s always right but in reality it is seldom correct.
Sounds like every human it’s been trained on
No, it sounds like a mindless statistics machine because that’s what it is. Even stupid people have reasons for saying and doing things.
Yes, stupid people’s reason is because Trump said so, so it must be true
If those people are inaccurately spouting "facts" from some article they can barely remember, then yeah, that's pretty much exactly the same output.
How would you phrase this differently?
“It looks like this feature was added 5 years ago.”
If asking for confirmation, just ask for confirmation.
I think it’s because it causes all of Lemmy to have a collective ragegasm. It’s kind of funny in a trollish way. I support OP in this endeavour.
“I used chatgpt”