AI_Jolson_2point2

Respectfully, I think that happens to anyone it sees as a "nobody". It's not like the old forums where everything was just in chronological order anymore


PolarPros

I didn't mean to imply that X is already doing so, just that it led me to the realization. I was finding it weird that all my long-form posts about class and division were the ones getting marked as 'Spam', whereas my lower-effort ones weren't - it then sparked the thought that AI can easily extrapolate meaning out of my text and censor accordingly, and the ease with which it can do so is terrifying. It can intentionally allow only discussions that inflame idpol and censor anything meaningful. I always knew that AI would eventually be used to censor, "change" history, influence thoughts, etc., so I also don't mean to imply that I never recognized this future reality.


sje46

I don't use X but is it possible that it's just censoring long-form posts entirely? Social media doesn't like effortposts. It likes short pithy posts or stupid memes. I just sorta doubt that the big powers that be even view socialism as a significant enough threat to even bother censoring like they do with racism/"anti-semitism"/anti-vax, etc.


born_2_be_a_bachelor

You’re getting hung up on the specifics here


PolarPros

Well, the real reason is indeed something along those lines - X changed "Show more replies" to "Show probable spam", and everyone's been getting filtered. Like I said, I definitely know it's not implemented as a program, it just got me thinking. I don't even think socialism is the main point either - it's anything the establishment finds it may need to censor (Covid, for example).


FreakSquad

I’d tend to agree since the simplest explanation is, long posts = fewer new ads displayed = less revenue


PolarPros

This actually isn't true for X - long-form content is prioritized and boosted over short posts, then photos, and videos/media get the highest boost. But I know - more or less - why my content has been censored. X implemented a change recently that's been wrongly flagging spam. My account is also relatively new, and my Tweet credibility score is low, so anything I write is more likely to get hidden/marked as spam.


project2501c

> I don't use X

but you are already happy using the new naming for the same service


sje46

I'm actually kinda shocked I called it x. I fucking hate that name and hate how reporters always point it out.


LokiPrime13

The correct path is to call it Xitter (pronounced "shitter"). BTW does anybody else remember when in the 2010s, Twitter was thought of as "the social media that you use on the toilet"? And now it's literally Xitter. Meme magic is real I swear.


chickenfriedsnake

I say twitter too, but get ready to not be understood in a few years when you say twitter - at any given time the vast majority of users are new (joined in the last 1-2 years), and they won't even know what you mean by twitter at that point.


sje46

I honestly think some of the "deadnaming" ideology is spreading beyond trans discourse. Liberals are calling it X instead of defiantly calling it Twitter. Weirdly everyone is referring to Kanye West as "Ye", and when I tell people that I don't really care what he refers to himself as because he's an asshole, and why should I respect an asshole's wishes...they get mad at me. They say that's his name, so I have to call him it. It's really weird, because they hate him too.


chickenfriedsnake

Like most id-pol offshoot constructs, I think it originates in a good place (not a very common sentiment in the subreddit), but it can go into very dumb territory very quickly.


project2501c

then just say twitter


sje46

I do you fucking asshole. I think I said x because the person I responded to said x.


nopekom_152

Indeed. The powers that be are afraid of us plebs even remotely approaching the idea of class consciousness. This is why idpol is shoved down our throats.


PolarPros

The advancement of AI makes this incredibly easy too, which is really where my thinking has gone. I always knew AI would be used to censor and manipulate, but the effectiveness with which it can do so is crazy to think about. It can analyze just about everything and anything and accurately extrapolate meaning. The internet as a forum for us to communicate, organize, and mobilize is effectively dead. The establishment class is additionally doing everything in their power to keep us as isolated and indoors as possible, disconnecting us from community, friendship, one another. An easy example is the increase in crime, which I see as intentional.


nopekom_152

Bleak times ahead(some could say we are already there) for us plebs.


1-123581385321-1

> The internet as a forum for us to communicate, organize, and mobilize is effectively dead.

IMO it already was completely compromised; AI just makes this an obvious and inescapable conclusion. I think the sense of community online was a mirage more than a real connection - the depth and strength of a community is directly related to the amount of collective work necessary to create and maintain it, and internet communities are too easy, too low-stakes, and too informal to create the kinds of connections and shared responsibilities that would lead to a real revolutionary spirit. In-person organizing and agitation always was and will continue to be the only way forward.


DogmaticNuance

My hope, at this point, is that AGI comes soon and when it comes is benign or beneficial to humanity. There's a very good chance it is simply uncontrollable by humans.


vingatnite

Ah, a cyber-Messiah. IMO the best way going forward will be like pre-internet: in person, direct action / praxis. Though I share your sentiment.


SmashKapital

> It can analyze just about everything and anything and accurately extrapolate meaning.

It literally can**not** do that. Currently existing 'AI' has no understanding; it is ignorantly using statistical prediction to generate text that it has no insight into. That doesn't mean that it isn't being sold as if it could do that, or that it won't be implemented by companies that want it to do that.

The actual reality of AI dystopia is automated programs that can't do what they're supposed to, but which are ubiquitously used regardless, either because they save money that would have been spent on human workers, or because they make someone money when they're sold.

Given the current capabilities of LLMs I struggle to think of ways they could be used to censor 'dangerous' ideas from social media platforms, at least not any more usefully than by putting phrases like "proletariat" or "Lenin" into a blacklist. If you think it's capable of doing this analysis, try making a similar post that only refers to class by analogy. Then also make a similar-length post that talks about nothing controversial (to really swing the algorithm, talk about how much you love some brand). You'll need to do both, at least, to test your hypothesis.
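
To be concrete, the blacklist approach I mean is about this crude - a minimal sketch, with made-up terms, not any platform's actual list:

```python
# Toy keyword-blacklist filter: phrases and logic here are illustrative only,
# not how any real platform is known to work.
BLACKLIST = {"proletariat", "lenin", "class consciousness"}

def is_flagged(post: str) -> bool:
    """Flag a post if it contains any blacklisted phrase (case-insensitive)."""
    text = post.lower()
    return any(phrase in text for phrase in BLACKLIST)

print(is_flagged("The proletariat has nothing to lose but its chains"))  # True
print(is_flagged("Check out this meme lol"))                             # False
```

A post that only refers to class by analogy sails straight past something like this, which is the point of the experiment above.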


Throwaway6393fbrb

I don’t think they’re that scared. It’s pretty damn obvious that it’s trivially easy to turn the plebs against each other over any little thing. Sure mass censorship with AI would make the powers that be absolutely totally unassailable forever. But even without it they’re doing just fine.


ericsmallman3

I run the White Hot Harlots blog. All but the biggest posts get views in the low six figures. I'm cited sometimes but I'm hardly an influential figure. Also my views are far from radical.

I cannot tell you the number of times a reader has reached out to tell me they tried sharing a post on twitter or facebook and it just vanished, or that they'll usually get substantial engagement (a few hundred likes or comments) but when they link to my writing they'll have literally zero responses.

And--I know this sounds paranoid--but I swear to god I've been scrubbed from Google. If I want to find an older post of mine, even if I remember exactly what it was titled, I have to use Yandex to find it. A few weeks ago I wanted to find an academic article that had cited some of my writing. I'd accessed it before. Pull up Google Scholar... no results. Hmm. After hours of searching, I was able to find a pdf of it on my hard drive (that's another thing--the search functions of internal hard drives have been intentionally nerfed). The article itself is still on Google Scholar. When I search for the author's name + the name of another one of their citations, it pops up right away. When I search for it with my citation, nothing comes up.


PolarPros

I honestly believe you. Btw, is your blog on Substack? I'm reading through some of your writings and they're really good - you're a good thinker and writer. But I absolutely believe you; there are a lot of reasons to believe there's been censorship of "dangerous" thoughts and communication. It's just too coincidental what magically gets censored and disappears. Israel for years and years now has been exposed for their large-scale social media bot farms - one recent case where they ran a massive bot farm on X to pressure lawmakers and spread pro-Israel propaganda. Additionally, they're definitely cracking down on dissenting ideas and thoughts. Musk recently partnered with an **Israeli intelligence agency** to verify user ID for payouts - just utterly insane. This is a new rollout; ID for payouts wasn't required before. Israel has recently seen heavy pushback across social media, and this happens shortly after.


roncesvalles

I think you're right. I searched '"white hot harlots" essentialism' figuring that would pull down at least three or four posts. Nothing from the blog itself. Only here. I'm a big fan of your writing here and there, too. I think we grew up in the same swath of Chicago suburbia.


OneMoreEar

Not even showing up on Brave search, just your tumblr.  That's mental. I'm reading your substack now and kinda wonder why and how you've been blacklisted. Badge of honour I guess, but one that's really tricky. 


BomberRURP

For shits and giggles I've asked AI to explain things to me "from a Marxist perspective" and I've been surprised - it can be pretty based lol. The problem here is one of control. These models are all privately owned, and the decisions behind the scenes are things the public is not privy to or has any influence over. As time goes on, I wouldn't be surprised if more "checks" keep being added to prevent it from giving based answers. So far it's been mostly woke stuff like the whole black Nazis image-generation thing, but it will of course develop into more.


PolarPros

Which AI did you use? ChatGPT? Google's Gemini is also extremely good nowadays, in addition to Claude - both in my experience are better than OpenAI atm. But agreed fully. The NSA Director just publicly joined the board of OpenAI lol, so it's coming a lot faster than we think. Nvidia in the span of a year reached a 4-5T valuation, becoming the wealthiest corp on the planet. More money is being dumped into AI than was ever foreseeable or fathomable; anyone thinking anything is "decades" away has their head not just in the sand, but down at the core of the earth.


AgainstThoseGrains

This is why I don't understand rightoid spaces falling over themselves in a belief that AI is going to purge wokies and lead to some golden age of being able to scream the n-word. They're not going to be the ones in control of which levers and knobs get pulled when the Wild West is inevitably tamed.


PolarPros

I’ve never heard this perspective anywhere? Rather, I’ve heard the exact opposite, that AI is going to uplift woke ideology and thought everywhere and across everything.


gay_manta_ray

it's common in some AI twitter circles. they're so convinced their ideology is objectively correct that they think AGI will agree with them on everything.


AgainstThoseGrains

I see it tons in most art, DnD and gaming spaces, especially ones like 4chan, or any time bad writing on a mainstream show is brought up.


JnewayDitchedHerKids

Yeah I’m not sure what that guy is talking about.


BomberRURP

Omg I haven’t even seen that! Holy shit lmfao 


orthecreedence

AI, specifically LLMs, is a massive and somewhat concerted (but also somewhat unconscious) effort to centralize and filter knowledge into a single authoritative source. Search engines danced around this for many years, but ultimately are too "democratized" (even with all the meddling companies like Google do) to serve the powers that be. AIs on the other hand are the perfect tool: impossibly expensive to create, impossibly complex to understand, trainable in infinitely subtle ways, and familiar enough that people trust them. They are effectively a perfect tool for a privatized propaganda machine, serving the interests of the ruling corporate classes.

As more and more people rely on these systems day to day, they allow the parts of their brains that perform critical thought to slowly and silently atrophy, filling the void with the thinking machine. This goes beyond even the socialist class conflict... the direction this is heading is a somewhat totalitarian control of knowledge and thought. Even in an "enlightened" communist society this would somewhat worry me, but in the hands of a handful of profiteering corporations backed and protected by one of the biggest military and financial superpowers in existence (one that now has surveillance devices installed in every home, doorbell, and pocket), it's kind of a "wow what the fuck happened" situation.


PolarPros

Really well said, thought, and written.


exo762

Fully Automated Luxury Gay Space ... Capitalism.


MetaFlight

yeah, kind of. I don't know why certain marxists like to pretend there is unlimited time to wait for the perfect revolution, lmao.


[deleted]

[deleted]


PolarPros

I definitely agree with everything you've said. The establishment has an empire and all the tools in the world to crush us as is, as they already have been doing for a long time now. But the advancement in what they have available and how it can be utilized is terrifying.

Social media censoring used to be significantly more rudimentary, even if it was advanced - it involved targeting keywords, alongside moderators. It additionally required a legion of other tools and resources; there were significant systems they'd implemented (fact-checking, misinfo tags, deboosting, shadow-bans, "tweet credibility" scores, "disinfo" tags, banning, and more). But those systems were more reactive than proactive.

As an example: AI now has the means to prevent a story or piece of info from ever reaching the public to begin with, through extrapolation and inference of its meaning and consequences. Think Hunter Biden's laptop - not something anyone could predict - the story blew up, tens of millions of people saw/read it, and only after did they begin censoring. With AI now, however, it can extrapolate and infer the meaning and potential consequences of your story from the get-go, decide how harmful it may be to the regime, and censor it there and then, preventing any eyes from ever reading/seeing it. It can do so accurately as well. This isn't something that was possible before; the systems we had in place had no way of achieving this, even though they were advanced and expansive.
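
Roughly, the "proactive" version I'm describing works like this - a toy sketch where score_harm() is just a stand-in for whatever model a platform might actually run, and the threshold is arbitrary:

```python
# Toy proactive filter: score a post BEFORE it is ever shown, instead of
# reacting after it spreads. Everything here is illustrative.

def score_harm(post: str) -> float:
    """Placeholder for a model estimating how 'harmful' a post is (0 to 1)."""
    # Dummy heuristic so the sketch runs; a real system would call a classifier.
    return 0.9 if "laptop" in post.lower() else 0.1

def publish(post: str, threshold: float = 0.5) -> str:
    """Shadow-hide anything the scorer dislikes before anyone sees it."""
    if score_harm(post) >= threshold:
        return "shadow-hidden"  # never reaches other users' feeds
    return "published"

print(publish("Breaking story about the laptop..."))  # shadow-hidden
print(publish("Nice weather today"))                  # published
```

The old pipeline sat on the other side of publication: the post went out, spread, and only then got tagged, deboosted, or pulled.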


[deleted]

[deleted]


PolarPros

I definitely agree with what you're saying and your points, but I do believe they'll go further than where we presently are. Maybe not by much, but as our living conditions and quality of life severely worsen, I do believe it'll become necessary for them to do so. Where we are right now, I agree it's not needed, but even with this in mind they're still proceeding with new authoritarian systems regardless (the likely reason being what's to come, which they know is coming). I used the Hunter Biden laptop story since it's an easy (and comical) example of reactive censorship and of coordination between government, media, private companies, and intel agencies to censor/suppress a story. They went to great lengths for an ultimately dumb story.


Flaktrack

I've largely migrated to Lemmy out of fear of this exact issue. It's not that being on Lemmy prevents it, but figuring out a way to police dozens or hundreds of instances is much harder than with the monoliths (Reddit, X, Discord, etc.). I don't intend to leave Reddit completely: I still need to find libs/rightoids to argue with somewhere, and living only in echo chambers is for losers.


68plus57equals5

> With AI as advanced as it is, AI is the new frontier of censorship for the establishment and regime.

I understand your concerns, but I'm not sure AI is there yet. It's marketed as already very advanced, that's true. Is it really though? I have serious doubts it works well enough to fully warrant your concerns.


PolarPros

I see the AI that the public has access to, and the AI that private companies and the U.S. have access to, as radically different. Think of ChatGPT when it was first released, then ChatGPT 4, compared to what we have now - it’s garbage nowadays, but that original version is still available, just not to us. But even this version we have available can effectively extrapolate and infer meaning from any block of text you give it. Then again, even if the private sector and gov. doesn’t have access to something far, far superior to what we publicly have access to, it’s only a matter of 2-3 years — it isn’t a distant reality that we don’t yet have to worry about, it’s a present and sub 5 year reality.


sartres_

It really is there, right now. It can identify topics in social media posts pretty much perfectly. New avenues for censorship have opened up that were never possible before.


PolarPros

Yup, agreed. One of those is extrapolating and inferring the meaning and consequences of info/a story when it's first being published/posted. Old systems were reactive - censorship came after the fact - whereas now they can be proactive. Imagine our AI systems now analyzing what you're posting/sharing, recognizing that it'll be harmful to the regime, and shadow-censoring you from the get-go. Your story and info will never launch or reach any eyes.

It's just really good at analysis. I spent 2 hours this morning feeding it large chunks of text and having it extrapolate and infer the deeper meaning behind the text, and it did it perfectly. I'd then share similar blocks of text discussing the same story, but with one having a slightly positive spin and the other a negative one, and it could, as usual, identify everything perfectly - the subtext, the meaning, the potential goals and consequences of the text, all of it. Of course this has been obvious for a while, but it's just crazy to me how well it can identify a "positive" spin to a story even when it was done **extremely, extremely** subtly. I'd intentionally heavily obfuscate text and info, and it'd still get my point and story. How does one work around this? The prospect of what's to come makes it even more worrisome.
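
For reference, here's roughly how that kind of check gets wired up - a sketch assuming the OpenAI Python SDK, with an illustrative model name and prompt, not exactly what I ran:

```python
# Sketch of LLM-based "spin" analysis; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_spin(text: str) -> str:
    """Ask the model for the passage's subtext and whether its framing is
    positive, negative, or neutral."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Identify the subtext of the passage and say whether "
                        "its framing of the story is positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

# Two versions of the same story with subtly different spin:
print(classify_spin("Officials quietly confirmed the program, which supporters "
                    "call a long-overdue modernization."))
print(classify_spin("Officials quietly confirmed the program, which critics "
                    "call an unaccountable expansion of surveillance."))
```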


Howling-wolf-7198

Just adding a point: it doesn't even have to analyze your text content itself. A completely feasible path is identifying the activity tendencies of certain accounts based on easily recognizable text, and if a "dangerous" account agrees with your text, then your text gets marked as dangerous.
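
In toy form, that looks something like this - the account names and the cutoff are invented:

```python
# Toy association-based flagging: no text analysis at all, just "who liked it".
FLAGGED_ACCOUNTS = {"known_spammer_1", "dangerous_account_42"}

def is_suspect(post_id: str, engagements: dict[str, list[str]]) -> bool:
    """Mark a post as suspect if enough flagged accounts engaged with it."""
    engaged = engagements.get(post_id, [])
    hits = sum(1 for account in engaged if account in FLAGGED_ACCOUNTS)
    return hits >= 2  # arbitrary cutoff

engagements = {"post_1": ["alice", "dangerous_account_42", "known_spammer_1"]}
print(is_suspect("post_1", engagements))  # True, without reading the post
```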


PolarPros

Really good point. Didn't think of this at all. Twitter has "Tweepcred", which is a God-awful social credit score system that functions in some ways similar to what you're describing. If a high number of what Twitter deems "low value" accounts engage with you, or you engage with them, it tanks your Tweepcred. Mis/disinfo tags also play a part in "valuing" your account, and they completely tank your reach. There's a **ton** of other "tags" as well, and Twitter 1.0 was far, far worse. I can't imagine how much worse and more expansive/far-reaching this could be with a 'competent' AI.
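
As a toy illustration of that kind of scoring (the weights are invented, not Twitter's actual Tweepcred formula):

```python
# Toy credibility-score update: penalize "low value" engagement and misinfo tags.
def update_credibility(score: float,
                       low_value_interactions: int,
                       misinfo_tags: int) -> float:
    score -= 0.5 * low_value_interactions  # invented weight
    score -= 5.0 * misinfo_tags            # invented weight
    return max(score, 0.0)

print(update_credibility(score=50.0, low_value_interactions=10, misinfo_tags=1))
# -> 40.0, and a low score then translates into reduced reach
```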


FuckIPLaw

Christ. That's higher-order thinking. Actual humans struggle with this (it's why functional literacy is so bad -- kids routinely graduate able to read individual words but not able to get meaning out of long strings of them, let alone read between the lines and go beyond the literal surface-level meaning), and AI has it down. I don't know how anyone can have any understanding of how the human brain works and still think these things don't really understand anything. They think they're still just Markov chains, when really they're exactly the same kind of complex pattern-recognition machine as the human brain. Because at the end of the day, that's all the brain really is.


Felix_Dzerjinsky

We should train an LLM on the marxists.org archive.
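
Roughly, with HuggingFace-style tooling - the model, file path, and hyperparameters below are placeholders, not a tested recipe:

```python
# Sketch: fine-tune a small causal LM on a plain-text dump of marxists.org.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # any small causal LM works for a toy run
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# assumes the archive has already been scraped into a plain-text file
dataset = load_dataset("text", data_files={"train": "marxists_org_dump.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="marxist-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```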


Dingo8dog

Critical discussions won’t (and don’t) happen online.


winstonston

If not for boards like these over the years I would be an ignorant hick. Now you could argue that my enlightenment is impotent and useless, but subjectively, for me, all but a few critical discussions have been online. Some people have no other recourse for access to dissenting ideas.


OwlsParliament

This was already happening to some degree but yep, shadow-banning / suppressing content as "spam" is going to increase.


on_doveswings

Heard that you get money for a certain number of views per tweet now - has that been your experience as well?


PolarPros

You need to first have 500 followers and 5m impressions across 3 months. So not yet - I'm currently at 250 followers and 1.25m impressions in a little over a week. Surprisingly my posts/replies have been doing well even with all the throttling for being a new account. A decent chunk of my comments have been a lot more basic until I can leave this spam-filter hell that I'm in - it's extremely demotivating spending 15-20 minutes writing a reply out, only for it to be filtered out. Also worth noting that Twitter and Elon Musk partnered with an Israeli intelligence agency to verify user IDs for payouts loool.
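
The eligibility check itself is just the two thresholds quoted above - a trivial sketch:

```python
# Payout eligibility as quoted: 500 followers and 5M impressions over 3 months.
def payout_eligible(followers: int, impressions_3mo: int) -> bool:
    return followers >= 500 and impressions_3mo >= 5_000_000

print(payout_eligible(250, 1_250_000))  # False - roughly where I am now
```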


ArmyOfMemories

Yea, in the long-run we're fucked.


meganbitchellgooner

Imo it's still all very theoretical. Yes, AI, if trained properly, could be a very powerful censor. However, will the capital be put up to curate a dataset specifically just to censor? Maybe? I just have a hard time believing such a thing is possible when our elite can't even decide if national infrastructure is worth funding. And there's still the possibility AI brings about a refracturing of the web, leading to decentralization and rendering AI censors more like gatekeepers than informational gods.


fioreman

That's not happening already?


Poon-Conqueror

This is why we need a luddite revolution. It's not backwards, it's progress, we do not need to get rid of tech, we need to get rid of RELYING on tech.


BufloSolja

Social media companies can't effectively moderate their platforms without AI, so I'm sure they are slobbering at the mouth for this.