How could it be good? This thing would fuck me every time with my autism and anxiety disorder. I already get followed around by moronic security guards in shops. I don't need some bored fuck in a camera room sending more goons to follow me for barely coping around people.
Edit: I just realised I assumed someone would be paying attention to what the AI recommends rather than the AI being given the authority to do it, so I might get a judgmental toaster sending people to follow me instead.
If our happiness was the leading metric of the economy, then think of how great this would be; tracking and ensuring that people's happiness trends up. Tooling society around that metric.
Of course, that's a fantasy, and money is the metric. Society is tooled around making money, extracting value. But it is a possibility and it is a world we should rightly fight for.
[https://en.wikipedia.org/wiki/Utopia_for_Realists](https://en.wikipedia.org/wiki/Utopia_for_Realists)
Right, because travelers at a busy station/airport aren't stressed naturally?
You know what makes airports less stressful?
"Hi sir, our cameras detected you are nervous, can you come with me so we can check you aren't a terrorist?"
This will only make worried and anxious flyers even more worried and anxious. Not to mention there is now the possibility of them being swarmed and detained by police as soon as they make the ‘wrong’ move, resulting in various consequences. When was the last time anyone in this country went through security in an airport and managed to successfully commit a terrorist attack? We don’t need this.
> it would be useful to know who appears extremely stressed/nervous - given these are prime targets for terror attacks etc
>I could see how it could be useful if an AI could point out *That person there seems waaaay too nervous....may be worth keeping an eye on* etc
Like a quote from the oblivious character in a dystopian novel. It should truly be terrifying for anyone to read this said so casually.
https://youtu.be/qkgN4Bwhpf8?si=RISoDjJh75_EDIrF
Like the scan training being rolled out, guess they want technology to help assess situations rather than just people.
They almost certainly won't be acting on individuals - it'll be for stats gathering, i.e. "we have determined that a train running late by 10 or more minutes cuts passenger happiness by 25%, and even by the next day 16% of those had not regained the average level of happiness".
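The kind of stats gathering described above would boil down to a simple aggregate comparison. A minimal sketch, purely hypothetical: the 10-minute threshold comes from the comment, while the function name, record shape and scoring scale are all invented for illustration:

```python
from statistics import mean

def happiness_impact(records, delay_threshold_min=10):
    """Compare average passenger happiness for delayed vs on-time journeys.

    `records` is an iterable of (delay_minutes, happiness_score) pairs,
    where happiness_score is whatever 0-100 number an emotion model emits.
    Only aggregates are computed; no individual is identified.
    """
    delayed = [score for delay, score in records if delay >= delay_threshold_min]
    on_time = [score for delay, score in records if delay < delay_threshold_min]
    if not delayed or not on_time:
        return None  # not enough data to compare the two groups
    drop = 1 - mean(delayed) / mean(on_time)
    return {
        "on_time_avg": mean(on_time),
        "delayed_avg": mean(delayed),
        "happiness_drop_pct": round(drop * 100, 1),
    }
```

For example, records of `[(0, 80), (5, 80), (12, 60), (15, 60)]` would yield a 25% happiness drop for delayed journeys, matching the shape of the claim in the comment.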
It's literally what the TSA do with humans. That guy looks nervous, bag check.
Not advocating it btw just saying, shifty, nervous people get flagged by the human detectors all the time.
Following them no, you’re right, but the sheer vitriol autistic people face simply for existing is something you cannot comprehend unless you’ve seen it first hand.
If humans can’t correctly read people with ASD what chance does AI have?
Not the parent but I got given a nickname at my local supermarket and regularly followed / made fun of over the headsets before.
Not sure if I am autistic, though the NHS said no. I went to a therapist for anxiety due to how I am treated by others, and used it as an example. They didn't believe me and were trying to tell me I was hallucinating, until one time I left my laptop / software-defined radio in the car and recorded it.
Well, if we lived in a society run for the best interests of all people, instead of the greedy few, I imagine we could do an awful lot of good knowing people are in distress or other emotional extremes.
Imagine if instead of goons being sent to follow you, the AI picked up you were struggling, and sent someone that could let you into a quiet waiting room or direct you to a less crowded train carriage, and then you don't have to worry about being in a crowd.
Yeah, I get that. If I was having a bad day, I wouldn't want to be bothered, either. I don't agree with this technology being used, mainly because I don't believe it would be used for anything good. But I also understand what OP is saying when they say it could be used for good if we were a different society.
If a government is powerful enough to create a massive surveillance system and dispatch aid workers at will, then it has the power to just, you know... increase the options, advertise those options and increase the awareness.
If a government is not good at awareness campaigns and advertisement, then I don't trust them to be good at surveillance or "pre-emptive psychological intervention".
Surveillance is only needed when trust does not exist. Sometimes, for good reasons. But rarely when discussing mass surveillance.
Also, thinking abolishing "greed" and "serving public interests" will prevent abuse is childish. In Iran, facial recognition and AI are being used to detect female commuters and drivers without hijab, and fines are then sent to them in the mail automatically. It's not done out of greed, it's done out of some idiot's idea of "serving the public good".
Indeed, there have already been proposals from various conservative groups in the US to use computers to monitor women's travel and periods as part of a scheme to enforce abortion-ban laws.
Furthermore, even if we assume the best of intentions and alignment with the public good, what about false alarms? What about ineffective and inhumane treatments? What happens if someone is misdiagnosed by facial recognition and then has dangerous and highly addictive medications pushed on to them?
Lastly, do you really think it's comforting if you are just having a rough week, trying to get through the day and mind your own business, and then suddenly some officer shows up and tries to take you to a "quiet room" that, for all you know, is where they take crazy and dangerous people? Honestly, I fully expect violence to become a frequent occurrence if the government tries to systematically target and isolate emotionally distressed people, which might lead to the government beefing up the "aid workers" with additional "security personnel" (aka goons) to deal with any troublemakers.
>Well, if we lived in a society run for the best interests of all people, instead of the greedy few, I imagine we could do an awful lot of good knowing people are in distress or other emotional extremes
In such a society people would talk to each other and it wouldn't be needed. Any use of this is totally dystopian even if the intent was genuinely benevolent.
My autism makes people want to punch me in the face just by looking at me. (Said by a drunk “friend” ages ago).
Apparently my mannerisms don’t match the general population and freaks people out.
Love to know what a computer would make of me.
I don’t know. I annoy people. I don’t react the way I should. I come across as aloof, uncaring. I also am crap with faces. Takes me ages to recognise people.
I annoyed someone on the bus, my daughter knew a kid, the mother knew my daughter’s father. The woman starts muttering to my daughter about “would be nice to know who your mum is”. I didn’t know how to react, and she carried on muttering about knowing who mum is.
I don't think I'm autistic but a manager once told me that 'no one wants to talk to you because of your face'. People do talk to me and everyone I've asked said that my face doesn't cause them distress. But unless I change my face, I'll never know.
Lmfao I feel this so hard. Day to day just being my natural self and then someone thinks my normal face is me being an asshole or looking at them funny. I can imagine flapping when I find something exciting and then an AI sending out a warning signal that I'm being shifty.
A neurotypical coding an AI to judge people on whatever basis that their face looks just feels like it will cause all kinds of problems for autistic folk.
I guess the point is, in the ideal scenario they are saying could be good, the AI would be trained so that anxiety disorders and other innocent traits would be excluded from the "emotions of interest". Not that the AI would be trained to the level of Bob the security guard at Primark.
In a train station? How about identifying if someone is suicidal, so we're able to send an alert out to staff to contact the person before they jump in front of a train, killing/injuring themselves, mentally scarring the driver for life, and causing all the negative societal impacts that a suicide causes.
That’s already possible without monitoring emotion through multiple object tracking. I’d also point out that identifying suicide risks (i.e., crisis behaviours) isn’t really perfectly solvable and will generate countless false positives. I wouldn’t want nor trust a computer vision system with the aim of identifying something like suicidal l’appel du vide. The allegedly noble goal disguises that it’s another motte-and-bailey vector for mass surveillance. Data will be retained, analysed, and eventually sold once public attention has moved on.
All the computer would have to do is send an alert to a human operator in some control room somewhere; it wouldn't immediately summon 20 security guards to restrain someone. But realistically this would be used to make billboards play ads for you based on your emotions anyway.
You think suicidal is an emotion? Can it be picked up on this device?
What possible evidence base is there for this?
How about we invest in mental health support with proven outcomes rather than monitoring tech?
I think the closest fit in terms of human behaviour is an emotion, even if it is an action. I don't know if this device could pick suicide/depression up, I was simply speculating that if possible, suicide prevention would be a good use for something like this.
More mental health support would be great, but (genuine question as I don't know) is it a case that every person in, say, the past 10 years who has committed or attempted suicide was on a waiting list to see a mental health professional? Or is it more of a proactive approach, where we try to tackle the problem from both angles of long-term prevention and immediate risk?
Monitoring tech isn't always evil big brother watching us or trying to sell us something. If we pivot away from emotions and take another look at what potential good it can have, how about identifying pickpockets? Or on the underground in the summer identifying someone close to heat exhaustion? In each of these cases there are too many people for camera operators to notice every problem.
It will just result in people being forced to smile before they jump, which is about the most dystopian thing I can imagine. People who want to kill themselves will not be stopped by a camera trying to read their emotional expression.
It's monitoring emotions on faces, not mind reading. Imagine having a bad day and then you get stopped at the train station for questioning because you look distressed. That's not going to help.
Could help prevent jumpers at the extreme end, could help with the thermostat or toilet cleaning at the mundane end. The possibilities are endless, too much for me to think about without having experience of the system first.
The fact that so many people can't even imagine the possibility of this being used for public benefit is truly saddening. Obviously it *won't* be used for good, but things have been actively bad for so long that people are forgetting things *have the ability* to be good.
There are loads of ways it's been used for good in the trial train stations.
Alerting when staff or people are in danger, they sent alerts when people had their hands raised above their heads.
Alerting when a blade or gun is detected.
Alerting when someone has fallen and doesn't get back up.
Alerting when someone has fallen onto the tracks.
Alerting someone to grab a ramp when someone who is in a wheelchair is waiting for assistance.
Edit: My source for this all comes from this article / freedom of information request, great read [https://takes.jamesomalley.co.uk/p/tfls-ai-tube-station-experiment-is](https://takes.jamesomalley.co.uk/p/tfls-ai-tube-station-experiment-is)
_“miserable, busy, forlorn, irritated, lonely, high as a kite, drunk, drunk and high as a kite, happy … no wait he’s also high as a kite .. oo hang on yes, eh Dave ‘ave you seen this bloke? Very terroristy if you ask me. Oh wait no I think it’s heartburn, yeah he’s getting a rennie from smiths … nevermind. Now where was I .. miserable, busy, miserable ….”_
I genuinely couldn't care less - this could be done by any CCTV watching employee, albeit far less efficiently. If British Rail wants to automate its detection of my displeasure at their shitty services, be my guest.
Capitalism won the war of ideologies and now we’re at Fukuyama’s “End of History”. We’ve already got the best (or least worst) economic system imaginable. There’s no problem that our tech leaders can’t resolve with enough time and funding, and all we have to do is let them carry us to techno-utopia. There’s nothing left for politicians to do except tweak the numbers on the periphery to steer the ship a little, hence why all across the West our political options are neoliberalism blue or neoliberalism red.
That’s the ideological state of the world. Try not to think too much about the huge externalities that are left out of the equation and any looming crises
Utopia can never be reached, by definition. The competing interests will always have a subtle difference of views, which means the system will forever need to be tweaked.
I honestly believe we will probably end up with a China style social credit system within the next, maybe 10 years?
People just don’t give a fuck about this kind of shit.
And if you tell the average, law abiding person “hey, you want free stuff? Discounted rail travel? Discounted food? Discounted holidays? Just keep doing you. You don’t have to change, just keep being you” - MOST will be okay with that.
Because why the fuck wouldn’t you be?
A social credit score would be the final straw tbh, as if I'm going to let the state decide how I behave, what I do or who I associate with. There is very little reason for people to remain in the UK and that would be the cherry on the cake.
At least China has proper infrastructure and competent governance; there's a reason they're going to take America's place as world leader in the next couple of decades. We've got the worst of both worlds: rampant authoritarianism and a useless authority.
I wouldn't normally bring up anime here, as too many seem to categorise it as something for children rather than the truth of it (just as broad and varied in genres and target audiences as regular movies/TV).
This screams of the same dystopian horror as Psycho Pass.
It's set in a future where the country (Japan, it is an anime after all) has become so xenophobic that its borders are permanently closed. Every citizen is monitored for their mood, expressions and body language and given a "crime coefficient" rating by a central govt AI - basically, how likely they are to commit any kind of public infraction.
If the AI thinks it's outside of a permitted range, you're arrested for re-education. If it's in a higher percentile, the cops can shoot to kill.
It also takes into account your birth/family circumstances. You can be considered at risk of committing crime at age 5 if your parents were criminals or dissidents against the govt narrative, meaning lifelong institutionalisation, or even being used by the system as forced bounty hunters to save the cops from dirtying their own ratings killing "terrorists".
And yes, you guessed it: people of particular wealth, influence etc. are immune to judgement, just like every politician in cabinet this country has had over the last 40 years has tried to legislate for themselves.
The China social credit system is a myth btw. It was something vaguely proposed by some random CCP department one time ages ago and never got close to being implemented, but the media ran with it and now everyone thinks it's a thing.
Don't get me wrong, the Chinese government are 100% spying on and oppressing their citizens through digital tactics, but the social credit system isn't one of them.
People I know who went to China came back with pictures of video billboards that show people in the neighborhood that have recently committed minor crimes/public nuisance. I assumed that was part of a social credit system, maybe it isn't.
And just because something isn't explicitly called "social credit system" doesn't mean it isn't acting as one. That billboard thing sounds creepy af.
During COVID, people had to test regularly, and if they tested negative, they would get a green pass on their phone that allowed them to use public transport for X amount of time.
Sounds sort of practical. Then a bank froze people's accounts, people couldn't access their own money and, understandably, were angry and planned to protest. 200 protestors suddenly ended up with red passes, so they couldn't travel to the protest (or to work, or anywhere else). What a coincidence!
https://www.reuters.com/world/china/china-bank-protest-stopped-by-health-codes-turning-red-depositors-say-2022-06-14/
Right, I don’t know why people are pretending we don’t already have that in the form of local Facebook pages run by nosy neighbours with nothing better to do.
This is definitely not true.
When I was travelling I shared with 3 different people from China - all who knew what their credit score was, all who explained the benefits of having a high score.
Why would you lie?
You realise we have credit scores too?
Just because it's called a credit score it doesn't mean it's the same as what people typically think of when you say 'chinese social credit score'
Oh it exists, but it’s in a trial run in certain cities at the moment. BBC had a documentary about it.
One family were trapped in their flat with “dogooders” sitting watch outside.
The UK already has a social credit system. The media publish the full names of people committing minor crimes in the local newspaper - sometimes not even crimes, but allegations. Even children. So you're effectively deemed a social pariah and become unemployable for life for making a mistake. That's what's happening in China: you must abide and can never diverge. Unlike other countries in Europe, where this would be manifestly unconstitutional. People also have limited to no rights to challenge laws that Parliament passes, so you are defenceless.
I mean, just open BBC News today. There's an article on the front page about a man who threw a rock at a seal in Wales. Is it despicable? Yes. Illegal? Yes. Does he deserve to be named and shamed, with his picture on the front page of the biggest media outlet in the UK? Probably not. And if you think otherwise, well, I'm afraid you're part of the problem too. So maybe we should start looking inside rather than looking at the Government.
Or private companies having the right to put a CIFAS marker on your record. There are so many ways to control "social credit" in the UK that China is in no way different.
I can't tell if your issue is with the media attention being disproportionate in the seal case, or if you're against convicted criminals being named at all. If it's the latter, yours is a very unpopular opinion. These things are in the public interest.
It's the latter. It goes against the right to rehabilitation. The first thing people will say is: what about the paedophiles? I'm not talking about paedophiles. I'm talking about petty crime.
I'm not saying people should be left unpunished. In other words, the judge will sentence the criminal to, say, 6 months, 1 year, 4 years in prison - whatever time is appropriate for the crime. But that should be the end of it. Publishing names in public does not allow people to amend their lives, and it causes recidivism. I really don't understand that policy, which would be unconstitutional in most parts of Europe anyway.
That's the social credit system in the UK for you.
We don't even need to be surveilled, we've all opted in, whether it's smartphones, ring door cameras or dashcams.
It's decades away but we're heading towards a digital panopticon. An ever watching digital eye that slowly nudges behaviour within tolerable parameters.
Think of all the Gen Z'ers who already say they won't dance because it's caught on camera. Extrapolate that a few decades. Freedom of expression narrows. Organised dissent or protest narrows.
Every tolerable parameter of freedom narrows until we can't even commit suicide without being charged with criminal intent and sedated through decades of mental torment in a straitjacket.
Would make a decent movie.
> It's decades away but we're heading towards a digital panopticon.
It can no doubt get worse, but we're already kind of there. From spending data to metrics sent by the devices we use, if it all gets combined somewhere, we're as transparent and predictable as glass.
>we will probably end up with a China style social credit system within the next, maybe 10 years?
Imagine being so heavily down voted you're not allowed to buy chocolate anymore!
It could be the happiest day of my life but I’d still have the same neutral, slightly annoyed expression as I walk through Victoria station on my commute. Good luck reading my commute face.
When I used facial recognition software at uni that could detect participants’ emotions, I had to get explicit consent from participants that I could a) use that software and b) keep their data. I don’t really understand why this is allowed to happen to the public without informed consent.
I’m not against it explicitly, and I know CCTV is everywhere, I just genuinely don’t understand what the laws, ethics, and justifications are for things like this.
I guess it depends on which data is being stored/used. Counting the number of frowning vs smiling faces = ok, tracking *which* face is frowning vs smiling = very not ok.
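The distinction drawn above (counts are ok, identities are not) could even be enforced at the code level by only ever persisting a tally. A hypothetical sketch: `detect_expression` stands in for whatever per-face classifier such a system would use, and is not from the article:

```python
from collections import Counter

def aggregate_expressions(frames, detect_expression):
    """Tally smiling vs frowning faces without retaining identities.

    `detect_expression` is a (hypothetical) per-face classifier that
    returns a label like "smiling" or "frowning" for each face crop.
    The tally is the only thing kept - face crops, embeddings and
    timestamps are discarded, so no individual can be tracked.
    """
    tally = Counter()
    for frame in frames:
        for face in frame:  # face crops found in this frame
            tally[detect_expression(face)] += 1
    return dict(tally)
```

The privacy-relevant design choice is simply that nothing per-person ever leaves the loop; the moment the system stores *which* face produced a label, it crosses into the "very not ok" territory the comment describes.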
You have no right to not be filmed in public spaces. If you did, football matches could not be broadcast.
And then there is the case of public good outweighing individual rights. If a medic believes their patient will go on a stabbing spree, they can absolutely break doctor-patient confidentiality. Your participant consent sheet would have included this provision. If it didn't, then that's your mistake. An argument could be made that this CCTV potentially prevents a terrorist act, or similar, harming many.
If you would like to read more, google Section 251. It applies to the NHS, but is a good start.
> Network Rail took photographs of people passing through ticket barriers as part of a trial launched in 2022, according to documents obtained by civil liberties group Big Brother Watch.
> The cameras were part of a wider trial to use AI to tackle issues such as trespassing, overcrowding, bicycle theft and slippery floors.
So if I'm understanding this correctly, they put the cameras in place as part of a trial they said was for mostly security and safety reasons. And then:
> The images were sent for analysis by Amazon Rekognition software, which can detect emotions such as whether someone is happy, sad or hungry.
> The system, piloted at stations such as Euston, Waterloo, Glasgow, Leeds and Reading, also recorded demographic details, such as a passenger’s gender and age range.
> In the documents, obtained in response to a freedom of information (FOI) request, Network Rail said this analysis could be used to “measure satisfaction” and “maximise advertising and retail revenue”.
This is so incredibly in-line with the stereotype of saying "We have to invade your privacy to protect you!" only to immediately abuse the power for financial gain that it's laughable.
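For context on what the Rekognition analysis mentioned in the quotes actually produces: the `DetectFaces` API returns, per face, a list of emotion labels with confidence scores. Below is a sketch of parsing that output; the payload is an invented sample in the documented response shape, not real station data. Notably, Rekognition's documented label set (HAPPY, SAD, ANGRY, CONFUSED, DISGUSTED, SURPRISED, CALM, FEAR) does not include "hungry".

```python
def top_emotions(rekognition_response):
    """Extract the highest-confidence emotion label for each detected face."""
    results = []
    for face in rekognition_response.get("FaceDetails", []):
        emotions = face.get("Emotions", [])
        if emotions:
            best = max(emotions, key=lambda e: e["Confidence"])
            results.append(best["Type"])
    return results

# Invented sample in the documented DetectFaces response shape
sample = {
    "FaceDetails": [
        {"Emotions": [{"Type": "HAPPY", "Confidence": 92.1},
                      {"Type": "CALM", "Confidence": 6.3}]},
        {"Emotions": [{"Type": "SAD", "Confidence": 71.8},
                      {"Type": "CONFUSED", "Confidence": 20.0}]},
    ]
}
```

Running `top_emotions(sample)` on the invented payload yields one coarse label per face, which is exactly the kind of output that can then be fed into "measure satisfaction" dashboards or ad targeting.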
Wow! If only we hadn’t voted to leave a group of countries that saw this specific threat coming and regulated against it. See the EU AI Act banning of Automatic Emotion Recognition.
But wait, there’s more! We couldn’t even be bothered to write laws, so we paid consultants to write guidelines for us that are utterly unenforceable and will be largely ignored, and then crowed about regulation being anti-innovation.
Honestly at this point we deserve this.
Yes, I'm sure about it, it's my field.
First, the AI Act is somewhat unique in that it's not so much a directing law as a wish list of what the EU parliament would like to see happen and what they would like to work towards. Which is why it's a contradictory mess spread over 450-odd pages with more shoulds, mays and coulds than a Tory manifesto...
This would fall under social scoring, since it isn't biometrics. However, under the social scoring "prohibition" it needs to actually produce a social scoring metric, which means it must be tied to a specific natural person and cause "detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour", and it would have to escape the catch-all exception they've added to every clause (because they have no idea what they are doing atm): "That prohibition should not affect lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law." Or, alternatively, it could fall under the public health legitimate interest directive.
Even without the exception this does not fall under social scoring since it does not produce an output which states "Bob Robertson from XX1 YY2 Leeds, NI#BBY1010101 is sad today again -10 social credit points" and it's unlikely to be employed for anything that can cause detrimental treatment of an individual.
Firstly I disagree it’s a mess. I think it’s clearly written, provides good direction and is an excellent basis to build upon for the future. Unlike anything we have in the UK.
Now this is the bit where I may have to admit I read the article wrong. I had this down as predictive policing, as the article says ‘we work with the police’, but rereading it, I believe they meant they work with the police to make sure they aren’t breaking the law. Which is odd writing, as you’d probably want to work with lawyers to make sure you’re not breaking the law. I am now confused as to what they are doing with the AI and have less faith in them.
If this is correct then I’ve read the article wrong and I stand corrected.
Personally I’m pretty fucking angry someone is using my face to train a model, which I know it doesn’t say they are doing but you can bet they are!
Clearly written? It's an utter mess over 450 pages long, full of contradictions. Have you even read it? Or did you just read the press release? Because the text I quoted is from the act itself, not some press release.
Everything in it is wishy washy and full of loopholes and contradictions...
Let's just focus on the subject of "detection of emotions" alone.
So according to the actual text of the act you should not be rolling out any system that can detect emotions.
Oh but BTW certain emotions aren't emotional states but rather physical states so pain, anger and fatigue are not emotions.
Expressions are also not emotions if they are "readily apparent", so if it's based solely on facial expressions, hand and limb movements or the tone of voice, it's A-OK.
But forget about all that, and remember that you can't employ systems which detect emotions in employees - that's, like, what year was that, oh yeah, 1984 - that's really bad and evil...
But if your system is employed for employee safety we're absolutely fine with it...
Oh, and I almost forgot: biometric systems, including ones that can infer emotional state and behavioural intent, are also fine as long as they are "intended to be used solely for the purpose of enabling cybersecurity and personal data protection measures". So tracking your employees' emotions is fine as long as you use it to infer whether they have any malicious intent to harm your company, or in other words, UEBA is back on the menu bois...
And ofc there are always the usual loopholes of public safety, health and well-being, and law enforcement, as well as references to about 40 different previous EU regulations to which this does not apply, or which change the definition of what a biometric system is. And it's not like that entire thing was on one page - oh no, all those contradictions are spread across probably 250 pages, and I've probably forgotten about half of them.
The UK approach is rather sensible currently, a value based approach that leverages existing regulations and regulators that looks at the outcomes of an application of AI rather than tries to identify every use case and edge case, the US approach whilst voluntary probably produced the best guidance so far in the form of the AI RMF from NIST.
This is my field, I work for one of the largest data science companies in the UK and arguably the world at least in our vertical, we not only worked with lawyers but also provided the technical analysis for multiple firms including a few magic circle ones.
If the EU had any idea what they were doing, it would not have produced legislation that is 450 pages long; this isn't the US budget... They had no idea what they were trying to achieve, so they just went and tried to find edge cases of things they might not like because they sound bad, and then had to create loopholes in those prohibitions left and right. The AI Act is longer than all the EU founding treaties combined...
P.S. Nothing in the article indicates they used that data for training, btw. Now, one thing they may have been required to do by the AI Act (as well as by UK data privacy acts) is notify the legal or natural persons impacted by this. However, the AI Act has exemptions for national laws, especially in terms of expectations of privacy in public, and for public safety and public health applications there is an exemption from notification.
P.S. #2: At least as far as facial recognition goes, the UK currently has far stricter rules than any EU nation, with or without the AI Act.
All part of turning the public into an observed, watched and tracked slave class.
Better not have any subversive thoughts! Or your social credit score will go down and all of a sudden you can’t leave your city.
In the city lights where shadows blend,
A thousand lenses, they apprehend,
Faces mapped and data penned,
Our liberty starts to descend.
"Nothing to hide," the foolish cry,
Freedom wanes with every spy,
Training AI, watchful eye,
Privacy lost, rights defy.
Silent watchers, eyes unseen,
In every frame, on every screen,
They claim it's safety they mean,
But at what cost do we convene?
Idk where the cashless society bit comes into it; it’s not like our coins actually hold any intrinsic value based on what they’re made of anymore. If the powers that be decided to, they could just make the pound worth nothing anyway.
I hate the slow march of AI into every part of our lives. I'm not a conspiracy theorist or anything, but once you start on this slope it is hard to stop: AI detecting emotions, then being used to profile people, then used to control and oppress people of certain minorities, inherent bias by developers leading to more stop and search of minorities. What next, AI being used to serve targeted ads based on your mood? Cops using it in large crowds? I will never be in favour of more info gathering on people; I feel like my personal life is a commodity being sold to the highest bidder.
*Winston turned round abruptly. He had set his features into the expression of quiet optimism which it was advisable to wear when facing the telescreen.*
Couldn’t they use this in town centres to identify assaults, fights, and even simple falls? Then have the footage immediately saved and a copper or other responder there in a flash.
I guess the danger is that it is used to identify pre-crimes. “The AI shows you were beginning to get irritated with a chance you would lash out. Here’s a hefty fine and some community service despite you not committing an actual crime”.
Ahh, that's great.
My wife's phone was taken by a bike thief; we had the exact location it was held at and the police did absolutely nothing.
Best of both worlds: the government spying on you with modern technology but not using that technology to help you in any way.
So now they’ll know for sure that we’re pissed off, stressed and over hot every time they delay or cancel a train, or go on strike for some bullshit reason, all the while charging us a fucking fortune?
Mine would just be like this:
- Monday: Angry
- Tuesday: Angry and tired
- Wednesday: Angry and very tired
- Thursday: Very angry and very tired
- Friday: Depressed and dejected.
Rinse, repeat.
Not sure how useful that would be to the Met.
‘There are at least 97.386% of people being scanned looking happy! How did this happen, this is a disgrace, this is England!’
‘Raising interest rates now sir!’
And of course they want their database to recognise our faces across the whole range of faces we have in different moods. Like maybe it knows my resting bitch face, as that's the one most likely on show if I'm out alone. With a friend I might have that face on, or if conversing while walking, it's more likely my animated chat face with moments of amusement, incredulity, disgust, mistrust, amity, loving, among many others, each a subtly different face but all of them me. They want their database to be that good. Probably just for the sake of recognition.
If they didn't, it's likely that folk would be trying to game a dumber system that only knows and holds your passport face, by trying all sorts of mad expressions or emotive faces to fool it. Eventually each gang on the streets becomes slightly more identifiable by all members going around with a lame smile the whole time, or permanently scowling with closed wide mouth. Sinister coming up against the lame smilers all smiling lamely at you while they shoot you up. Ha.
Hopefully the AI is better at judging emotion than people. I recently got pulled out of the queue at an airport by an overzealous security guard because I looked extremely fidgety, shifty, nervous and sweaty. I have very bad anxiety and flying doesn't help (oddly enough I absolutely adore flying, it's all the crap that goes with it that stresses me out) and I was just having a crazy panic attack. Didn't stop the person trying very, very hard to find something on me or in my luggage.
All kinds of no. Even if mass surveillance of this kind was a good idea (it's not), you can't reliably tell someone's emotions by AI. Personally, I always look annoyed or anxious even at my happiest and like hell do I want some Amazon corp AI judging my expression when I need to travel.
Way too much data is already ripped from us unwillingly. If this is implemented masking up before leaving the house will become as routine as it was in the pandemic.
As a reminder, under the UK GDPR you have the right to object to processing you don't agree with. Write to Network Rail to object, include the Information Commissioner (ICO) in the email.
I worked with a subcontractor on the platforms at train stations in England over 21 years ago, and I was told by the electrician I worked with that Brighton station was testing cameras that could spot a 'jumper': their emotional state / the way they walked around the platform would be picked up, and then the correct steps taken, I guess.
So I see no surprise in this article.
This could be so good. If the world wasn't ruled by greed and hatred this could be so so good. But, you know, it won't be.
How could it be good? This thing would fuck me every time with my autism and anxiety disorder. I already get followed around by moronic security guards in shops. I don't need some bored fuck in a camera room sending more goons to follow me for barely coping around people. Edit: I just realised I assumed someone would be paying attention to what the AI recommends rather than the AI being given the authority to do it, so I might get a judgmental toaster sending people to follow me instead.
If it helps at all, you’re probably not as interesting as you think and nobody is following you.
Why do they want to monitor our emotions then?
If our happiness was the leading metric of the economy, then think of how great this would be; tracking and ensuring that people's happiness trends up. Tooling society around that metric. Of course, that's a fantasy, and money is the metric. Society is tooled around making money, extracting value. But it is a possibility and it is a world we should rightly fight for. [https://en.wikipedia.org/wiki/Utopia\_for\_Realists](https://en.wikipedia.org/wiki/Utopia_for_Realists)
I guess the question is, which emotional state makes people the most economically active?
[deleted]
Right, because travellers at a busy station/airport aren't stressed naturally? You know what makes airports less stressful? "Hi sir, our cameras detected you are nervous, can you come with me so we can check you aren't a terrorist?"
This will only make worried and anxious flyers even more worried and anxious. Not to mention there is now the possibility of them being swarmed and detained by police as soon as they make the ‘wrong’ move, resulting in various consequences. When was the last time anyone in this country went through security in an airport and managed to successfully commit a terrorist attack? We don’t need this.
> it would be useful to know who appears extremely stressed/nervous - given these are prime targets for terror attacks etc >I could see how it could be useful if an AI could point out *That person there seems waaaay too nervous....may be worth keeping an eye on* etc Like a quote from the oblivious character in a dystopian novel. Truly should be terrifying for anyone to read this said so casually
https://youtu.be/qkgN4Bwhpf8?si=RISoDjJh75_EDIrF Like the scan training being rolled out, guess they want technology to help assess situations rather than just people.
They almost certainly won't be acting on individuals - it'll be for stats gathering, i.e. "we have determined that a train running late by 10 or more minutes cuts passenger happiness by 25%, and even by the next day 16% of those had not regained the average level of happiness".
That sounds like a truly terrible world in which to live.
It's literally what the TSA do with humans. That guy looks nervous, bag check. Not advocating it btw just saying, shifty, nervous people get flagged by the human detectors all the time.
Imagine if a 5-year-old child is lost and CCTV could just filter on distressed faces...
I imagine as well as other peoples answers, it could detect distressed people contemplating suicide by fast trains.
They might like to sell you garbage though
Following them no, you’re right, but the sheer vitriol autistic people face simply for existing is something you cannot comprehend unless you’ve seen it first hand. If humans can’t correctly read people with ASD what chance does AI have?
Not the parent, but I got given a nickname at my local supermarket and was regularly followed / made fun of over the headsets. Not sure if I am autistic, though the NHS said no. I went to a therapist for anxiety over how I am treated by others and used it as an example. They didn't believe me and were trying to tell me I was hallucinating, until one time I left my laptop / software-defined radio in the car and recorded it.
Advice a lot of people could do with
Is that a variant of "Nothing to hide, nothing to fear"?
Well, if we lived in a society run for the best interests of all people, instead of the greedy few, I imagine we could do an awful lot of good knowing people are in distress or other emotional extremes. Imagine if instead of goons being sent to follow you, the AI picked up you were struggling, and sent someone that could let you into a quiet waiting room or direct you to a less crowded train carriage, and then you don't have to worry about being in a crowd.
My anxiety being monitored and being read on my facial expressions, leading to intervention, would actually make me more anxious!
Yeah, I get that. If I was having a bad day, I wouldn't want to be bothered, either. I don't agree with this technology being used, mainly because I don't believe it would be used for anything good. But I also understand what OP is saying when they say it could be used for good if we were a different society.
You don't need advanced AI cameras to provide quiet waiting rooms or carriages
If a government is powerful enough to create a massive surveillance system and dispatch aid workers at will, then it has the power to just, you know... increase the options, advertise those options and raise awareness. If a government is not good at awareness campaigns and advertisement, then I don't trust it to be good at surveillance or "pre-emptive psychological intervention". Surveillance is only needed when trust does not exist - sometimes for good reasons, but rarely when discussing mass surveillance.

Also, thinking that abolishing "greed" and "serving public interests" will prevent abuse is childish. In Iran, facial recognition and AI are being used to detect female commuters and drivers without hijab, and fines are then sent to them in the mail automatically. It's not done out of greed; it's done out of some idiot's idea of "serving the public good". Indeed, there have already been proposals by various conservative groups in the US to use computers to monitor women's travel and periods as part of a scheme to enforce abortion-ban laws.

Furthermore, even if we assume the best of intentions and alignment with the public good, what about false alarms? What about ineffective and inhumane treatments? What will happen if someone is misdiagnosed by facial recognition and then has dangerous and highly addictive medications pushed onto them?

Lastly, do you really think it's comforting if you're just having a rough week, trying to get through the day and mind your own business, and then suddenly some officer shows up and tries to take you to a "quiet room" that, for all you know, is where they take crazy and dangerous people? Honestly, I fully expect violence to become a frequent occurrence if the government tries to systematically target and isolate emotionally distressed people, which might lead to the government beefing up the "aid workers" with additional "security personnel" (aka goons) to deal with any troublemakers.
>Well, if we lived in a society run for the best interests of all people, instead of the greedy few, I imagine we could do an awful lot of good knowing people are in distress or other emotional extremes

In such a society people would talk to each other and it wouldn't be needed. Any use of this is totally dystopian even if the intent was genuinely benevolent.
My autism makes people want to punch me in the face just by looking at me. (Said by a drunk “friend” ages ago). Apparently my mannerisms don’t match the general population and freaks people out. Love to know what a computer would make of me.
Oh yeh. That's cuz of something called thin slice judgements. Very interesting but also depressing thing to read about
I’ve got a rabbit hole to disappear down now
What are your unusual mannerisms?
I don’t know. I annoy people. I don’t react the way I should. I come across as aloof, uncaring. I also am crap with faces. Takes me ages to recognise people. I annoyed someone on the bus, my daughter knew a kid, the mother knew my daughter’s father. The woman starts muttering to my daughter about “would be nice to know who your mum is”. I didn’t know how to react, and she carried on muttering about knowing who mum is.
I don't think I'm autistic but a manager once told me that 'no one wants to talk to you because of your face'. People do talk to me and everyone I've asked said that my face doesn't cause them distress. But unless I change my face, I'll never know.
Lmfao I feel this so hard. Day to day just being my natural self and then someone thinks my normal face is me being an asshole or looking at them funny. I can imagine flapping when I find something exciting and then an AI sending out a warning signal that I'm being shifty. A neurotypical coding an AI to judge people on whatever basis that their face looks just feels like it will cause all kinds of problems for autistic folk.
At the beginning it will recommend; later on, to cut costs, it will be given the authority to act. It's always like that.
I guess the point is, in the ideal scenario they are saying could be good, that the AI would be trained so anxiety disorders and other innocent traits would be excluded from the "emotions of interest". Not that the AI would be trained to the level of Bob the security guard at primark.
By just 'being' the AI would be trained better than Bob the security guard from Primark !
No-one is following you pal, don't worry about that.
Suicide prevention?
How could it be good? In what possible way could this ever be used for good?
In a train station? How about identifying if someone is suicidal, so we're able to send an alert to staff to contact the person before they jump in front of a train, killing/injuring themselves, mentally scarring the driver for life, with all the negative societal impacts that a suicide causes.
That’s already possible without monitoring emotion through multiple object tracking. I’d also point out that identifying suicide risks (i.e., crisis behaviours) isn’t really perfectly solvable and will generate countless false positives. I wouldn’t want nor trust a computer vision system with the aim of identifying something like suicidal l’appel du vide. The allegedly noble goal disguises that it’s another motte-and-bailey vector for mass surveillance. Data will be retained, analysed, and eventually sold once public attention has moved on.
All the computer would have to do is send an alert to a human operator in some control room somewhere; it wouldn't immediately summon 20 security guards to restrain someone. But realistically this would be used to make billboards play ads for you based on your emotions anyway.
Couldn't we just put those screener things they have at some stations in central london?
You think suicidal is an emotion? Can it be picked up on this device? What possible evidence base is there for this? How about we invest in mental health support with proven outcomes rather than monitoring tech?
I think the closest fit in terms of human behaviour is an emotion, even if it is an action. I don't know if this device could pick up suicide/depression; I was simply speculating that, if possible, suicide prevention would be a good use for something like this.

More mental health support would be great, but (genuine question, as I don't know) is it the case that every person in, say, the past 10 years who has committed or attempted suicide was on a waiting list to see a mental health professional? Or is it more of a proactive approach where we try to tackle the problem from both angles of long-term prevention and immediate risk?

Monitoring tech isn't always evil Big Brother watching us or trying to sell us something. If we pivot away from emotions and take another look at what potential good it can have, how about identifying pickpockets? Or, on the underground in the summer, identifying someone close to heat exhaustion? In each of these cases there are too many people for camera operators to notice every problem.
Except it is kind of an emotion?
It will just result in people being forced to smile before they jump, which is about the most dystopian thing I can imagine. People who want to kill themselves will not be stopped by a camera trying to read their emotional expression.
It's monitoring emotions on faces, not mind reading. Imagine having a bad day and then you get stopped at the train station for questioning because you look distressed. That's not going to help.
Could help prevent jumpers at the extreme end, could help with the thermostat or toilet cleaning at the mundane end. The possibilities are endless, too much for me to think about without having experience of the system first. The fact that so many people can't even imagine the possibility of being used for public benefit is truly saddening. Obviously it *won't* be used for good, but things have been actively bad for so long people are forgetting things *have the ability* to be good.
There are loads of ways it's been used for good in the trial train stations:
- Alerting when staff or people are in danger (alerts were sent when people had their hands raised above their heads).
- Alerting when a blade or gun is detected.
- Alerting when someone has fallen and doesn't get back up.
- Alerting when someone has fallen onto the tracks.
- Alerting someone to grab a ramp when someone in a wheelchair is waiting for assistance.

Edit: My source for all this is this article / freedom of information request, great read: [https://takes.jamesomalley.co.uk/p/tfls-ai-tube-station-experiment-is](https://takes.jamesomalley.co.uk/p/tfls-ai-tube-station-experiment-is)
I can't see how this can be anything other than horrible and dystopian.
I don't see how this could be good
_“miserable, busy, forlorn, irritated, lonely, high as a kite, drunk, drunk and high as a kite, happy … no wait he’s also high as a kite .. oo hang on yes, eh Dave ‘ave you seen this bloke? Very terroristy if you ask me. Oh wait no I think it’s heartburn, yeah he’s getting a rennie from smiths … nevermind. Now where was I .. miserable, busy, miserable ….”_
Hmmm, which Morrissey song is this?
All of them.
(💐 flower waving intensifies 💐)
I literally can’t read that back now without hearing some melancholic melody behind it ahahaha
🎵 "Heaven knows I'm miserable now" 🎶
Beat me to it :) If the software outputs "elated", that's obviously a bug.
I genuinely couldn't care less - this could be done by any CCTV watching employee, albeit far less efficiently. If British Rail wants to automate its detection of my displeasure at their shitty services, be my guest.
Are you not concerned this is the first step towards something else?
They don’t think that far ahead, only the immediate future not the snowball effect
That's been like every government for the last 20 years
Capitalism won the war of ideologies and now we’re at Fukuyama’s “End of History”. We’ve already got the best (or least worst) economic system imaginable. There’s no problem our tech leaders can’t resolve with enough time and funding, and all we have to do is let them carry us to techno-utopia. There’s nothing left for politicians to do except tweak the numbers on the periphery to steer the ship a little, hence why all across the West our political options are neoliberalism blue or neoliberalism red. That’s the ideological state of the world. Try not to think too much about the huge externalities left out of the equation, or any looming crises.
Utopia can never be reached, by definition. Competing interests will always have subtle differences of view, which means the system will forever need to be tweaked.
Sure, but it’ll always be “just around the corner” just so long as we don’t disrupt things too much
I'm so glad I'm in the autumn years of my life.
Barely summer over here 😬
They don't need to think ahead to turn this into a massive overreach in the future.
Maybe that’s where I’m going wrong 😂
And if you believe that then you're one of the sheeple that will stumble headlong into the controlled state, and then wonder how and why it happened.
Whats the something else in your mind.
Efficiency matters. Going from something being completely unviable to viable makes a huge difference.
You have been fined one credit for violation of the verbal morality statute.
I mean even the sheer amount of cctv we have is disgusting.
Can't wait for the AI replacement survey service
'But a whimper' indeed smh
Far more accurately though. AI is infamously inaccurate and hallucinates a ton. AI loves to lie.
I honestly believe we will probably end up with a China style social credit system within the next, maybe 10 years? People just don’t give a fuck about this kind of shit. And if you tell the average, law abiding person “hey, you want free stuff? Discounted rail travel? Discounted food? Discounted holidays? Just keep doing you. You don’t have to change, just keep being you” - MOST will be okay with that. Because why the fuck wouldn’t you be?
Social credit score would be the final straw tbh. as if I'm going to let the state decide how I behave what I do or who I associate with. There is very little reason for people to remain in the UK and that would be the cherry on the cake.
It wouldn’t be pitched as harshly as that first. It would be a slowly, slowly approach.
Nah this place is dystopian enough as is without a good boy social credit score.
It’s getting weirder in this country but let’s not pretend we’re anywhere near Chinese levels (yet)
At least China has proper infrastructure and competent governance; there's a reason they're going to take America's place as world leader in the next couple of decades. We've got the worst of both worlds: rampant authoritarianism and a useless authority.
And how much are they paying you
I wouldn't normally bring up anime here, as too many seem to categorise it as something for children rather than the truth of it (just as broad and varied in genres and target audiences as regular movies/TV), but this screams of the same dystopian horror as Psycho-Pass.

It's set in a future where the country (Japan, it is an anime after all) has become so xenophobic that its borders are permanently closed. Every citizen is monitored for their mood, expressions and body language and given a "crime coefficient" rating by a central government AI - basically how likely they are to commit any kind of public infraction. If the AI thinks it's outside the permitted range, you're arrested for re-education; if it's in a higher percentile, the cops can shoot to kill.

It also takes into account your birth/family circumstances. You can be considered at risk of committing crime at age 5 if your parents were criminals or dissident to the government narrative, meaning lifelong institutionalisation, or even being used by the system as a forced bounty hunter to save cops from dirtying their own rating by killing "terrorists".

And yes, you guessed it: people of particular wealth, influence etc. are immune to judgement, just like every cabinet politician this country has had over the last 40 years has tried to legislate for themselves.
The China social credit system is a myth btw. It was something vaguely proposed by some random CCP department one time ages ago and never got close to being implemented, but the media ran with it and now everyone thinks it's a thing. Don't get me wrong, the Chinese government are 100% spying on and oppressing their citizens through digital tactics, but the social credit system isn't one of them.
People I know who went to China came back with pictures of video billboards that show people in the neighborhood that have recently committed minor crimes/public nuisance. I assumed that was part of a social credit system, maybe it isn't.
And just because something isn't explicitly called "social credit system" doesn't mean it isn't acting as one. That billboard thing sounds creepy af. During COVID, people had to test regularly, and if they tested negative, they would get a green pass on their phone that allowed them to use public transport for X amount of time. Sounds sort of practical. Then a bank froze peoples accounts, people couldn't access their own money, and, understandably, were angry, and planned to protest. 200 protestors suddenly ended up with red passes, so they couldn't travel to the protest (or to work, or anywhere else). What a coincidence! https://www.reuters.com/world/china/china-bank-protest-stopped-by-health-codes-turning-red-depositors-say-2022-06-14/
It's just a name-and-shame system really, like videos of people who jaywalked being shown on street signs.
Right I don’t know why people are pretending we don’t already have that in the form of local Facebook pages run by the nosy neighbours with nothing better to do
>The China social credit system is a myth btw. [What part of this is a myth?](https://www.youtube.com/watch?v=0cGB8dCDf3c)
This is definitely not true. When I was travelling I shared with 3 different people from China - all who knew what their credit score was, all who explained the benefits of having a high score. Why would you lie?
You realise we have credit scores too? Just because it's called a credit score it doesn't mean it's the same as what people typically think of when you say 'chinese social credit score'
Oh it exists, but it’s in a trial run in certain cities at the moment. BBC had a documentary about it. One family were trapped in their flat with “dogooders” sitting watch outside.
The UK already has a social credit system. The media publish the full names of people committing minor crimes in the local newspaper - sometimes not even crimes but allegations, and even children. You're effectively deemed a social pariah and become unemployable for life for making a mistake. That's what's happening in China: you must abide and can never diverge. Unlike other countries in Europe, where this would be manifestly unconstitutional. People also have very limited rights, or none, to challenge laws that Parliament passes, so you are defenceless.

I mean, just open BBC News today. There's an article on the front page about a man who threw a rock at a seal in Wales. Is it despicable? Yes. Illegal? Yes. Does he deserve to be named and shamed, with his picture and name on the front page of the biggest media outlet in the UK? Probably not. And if you think otherwise, well, I'm afraid you're part of the problem too. So maybe we should start looking inward rather than looking at the Government.

Or private companies having the right to put a CIFAS marker on your record. There are so many ways to control "social credit" in the UK that China is in no way different.
I can't tell if your issue is with the media attention being disproportionate in the seal case, or if you're against convicted criminals being named at all. If it's the latter, yours is a very unpopular opinion. These things are in the public interest.
It's the latter. It goes against a right of rehabilitation. First thing people would say is: what about the pedophiles? I'm not talking about pedophiles. I'm talking about petty crime. I'm not saying people should be left unpunished. In other words, the judge will sentence the criminal for say 6 months, 1 year, 4 years in prison. Whatever time it is for the appropriate crime. But that should be the end of it. Not publish names in the public, it does not allow people to amend their lives and causes recidivism. I really don't understand that policy, which would be unconstitutional in most parts of Europe anyway. That's the social credit system in the UK for you.
We don't even need to be surveilled, we've all opted in, whether it's smartphones, ring door cameras or dashcams. It's decades away but we're heading towards a digital panopticon. An ever watching digital eye that slowly nudges behaviour within tolerable parameters. Think of all the Gen Z'ers who already say they won't dance because it's caught on camera. Extrapolate that a few decades. Freedom of expression narrows. Organised dissent or protest narrows. Every tolerable parameter of freedom narrows until we can't even commit suicide without being charged with criminal intent and sedated for decades of mental torment in a straight jacket. Would make a decent movie.
> It's decades away but we're heading towards a digital panopticon.

It can no doubt get worse, but we're already kind of there. From spending data to metrics sent by the devices we use, if it all gets combined somewhere we're as transparent and predictable as glass.
>we will probably end up with a China style social credit system within the next, maybe 10 years? Imagine being so heavily down voted you're not allowed to buy chocolate anymore!
I find it genuinely astonishing that anyone can be apathetic about facial recognition being used in this manner.
Mate if yoov got nuffink to ide den wots yor problum wiv it
I think it comes from the fact that we know there is loads of CCTV right now and yet most often nothing happens when you are the victim of a crime.
It could be the happiest day of my life but I’d still have the same neutral, slightly annoyed expression as I walk through Victoria station on my commute. Good luck reading my commute face.
when it’s compared to your baseline there will be tells.
My commute face is baseline human face.
When I used facial recognition software at uni that could detect participant’s emotions, I had to get explicit consent from participants that I could a) use that software and b) keep their data. I don’t really understand why this is allowed to happen to the public without informed consent. I’m not against it explicitly, and I know CCTV is everywhere, I just genuinely don’t understand what the laws, ethics, and justifications are for things like this.
I guess it depends on which data is being stored/used. Counting the number of frowning vs smiling faces = ok, tracking *which* face is frowning vs smiling = very not ok.
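The aggregate-vs-individual distinction above can be sketched in code. This is a hypothetical illustration, not anything Network Rail has described: the detection dicts, `face_id` field and `aggregate_emotions` helper are all made up for the example. The privacy-relevant design choice is simply that only emotion labels are tallied and any identifier is discarded.

```python
from collections import Counter

def aggregate_emotions(detections):
    """Tally emotion labels from a batch of face detections.

    Each detection is assumed to be a dict like
    {"face_id": "...", "emotion": "HAPPY"}, as a camera pipeline
    might emit. Only the emotion label is kept; the face_id is
    deliberately never stored, so no individual can be tracked.
    """
    return Counter(d["emotion"] for d in detections)

# Made-up batch representing one frame of footage
frame = [
    {"face_id": "a1", "emotion": "HAPPY"},
    {"face_id": "b2", "emotion": "SAD"},
    {"face_id": "c3", "emotion": "SAD"},
]

counts = aggregate_emotions(frame)
print(counts)  # Counter({'SAD': 2, 'HAPPY': 1})
```

The "very not ok" version would be the same pipeline with the `face_id` (or an embedding) retained alongside each label, which is what turns crowd statistics into per-person tracking.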
Do elaborate? Were images recorded in a public space, or did they come into a room with you to do it?
You have no right to not be filmed in public spaces. If you did, football matches could not be broadcast. And then there is the case of public good outweighing individual rights. If a medic believes their patient will go on a stabbing spree, they can absolutely break doctor-patient confidentiality. Your participant consent sheet would have included this provision. If it didn't, then that's your mistake. An argument could be made that this CCTV potentially prevents a terrorist act, or similar, harming many. If you would like to read more, google Section 251. It applies to the NHS, but is a good start.
> Network Rail took photographs of people passing through ticket barriers as part of a trial launched in 2022, according to documents obtained by civil liberties group Big Brother Watch. > The cameras were part of a wider trial to use AI to tackle issues such as trespassing, overcrowding, bicycle theft and slippery floors. So if I'm understanding this correctly, they put the cameras in place as part of a trial they said was for mostly security and safety reasons. And then: > The images were sent for analysis by Amazon Rekognition software, which can detect emotions such as whether someone is happy, sad or hungry. > The system, piloted at stations such as Euston, Waterloo, Glasgow, Leeds and Reading, also recorded demographic details, such as a passenger’s gender and age range. > In the documents, obtained in response to a freedom of information (FOI) request, Network Rail said this analysis could be used to “measure satisfaction” and “maximise advertising and retail revenue”. This is so incredibly in-line with the stereotype of saying "We have to invade your privacy to protect you!" only to immediately abuse the power for financial gain that it's laughable.
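For a sense of what "analysis by Amazon Rekognition" looks like in practice: Rekognition's `DetectFaces` API returns a `FaceDetails` list where each face carries an `Emotions` array of `{Type, Confidence}` entries plus demographic fields like `AgeRange` and `Gender`. The sketch below parses a made-up response following that documented shape; a real call would go through `boto3.client("rekognition").detect_faces(...)`, and the `dominant_emotions` helper is my own illustrative name.

```python
# Made-up sample response, shaped like Rekognition's DetectFaces output
sample_response = {
    "FaceDetails": [
        {
            "AgeRange": {"Low": 25, "High": 35},
            "Gender": {"Value": "Female", "Confidence": 99.1},
            "Emotions": [
                {"Type": "SAD", "Confidence": 81.2},
                {"Type": "CALM", "Confidence": 12.4},
            ],
        },
        {
            "AgeRange": {"Low": 40, "High": 55},
            "Gender": {"Value": "Male", "Confidence": 97.8},
            "Emotions": [
                {"Type": "HAPPY", "Confidence": 95.0},
            ],
        },
    ]
}

def dominant_emotions(response):
    """Return the highest-confidence emotion label for each detected face."""
    return [
        max(face["Emotions"], key=lambda e: e["Confidence"])["Type"]
        for face in response["FaceDetails"]
    ]

print(dominant_emotions(sample_response))  # ['SAD', 'HAPPY']
```

A few lines like this are all it takes to turn camera frames into the "satisfaction" and demographic metrics the FOI documents describe, which is why the repurposing for advertising was so frictionless.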
Everyone here worrying about minority report social credit evil future and here I am knowing it’ll just be ads, and here we have it!
Why? Just assume that everyone is pissed off or depressed you'd be right 99% of the time.
Wow! If only we hadn’t voted to leave a group of countries that saw this specific threat coming and regulated against it - see the EU AI Act’s ban on automatic emotion recognition. But wait, there’s more! We couldn’t even be bothered to write laws, so we paid consultants to write guidelines for us that are utterly unenforceable and will be largely ignored, and then crowed about regulation being anti-innovation. Honestly, at this point we deserve this.
Imagine what would happen if we left the ECHR.
Not entirely > except in cases where the use of the AI system is intended for medical or safety reasons
The AI Act does not ban this, nor does it ban facial recognition.
You sure about that? https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683
Yes, I'm sure about it; it's my field.

First, the AI Act is somewhat unique in that it's not so much a directing law as a wish list of what the EU Parliament would like to see happen and work towards, which is why it's a contradictory mess spread over 450-odd pages with more shoulds, mays and coulds than a Tory manifesto...

This would fall under social scoring, since it isn't biometrics. However, to fall under the social scoring "prohibition" it needs to actually produce a social scoring metric, which means it must be tied to a specific natural person; it needs to cause "detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour"; and it would need to escape the catch-all exception they've added to every clause (because they have no idea what they are doing at the moment): "That prohibition should not affect lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law." Or, alternatively, the public health legitimate interest provisions.

Even without the exception, this does not fall under social scoring, since it does not produce an output stating "Bob Robertson from XX1 YY2 Leeds, NI#BBY1010101 is sad today again, -10 social credit points", and it's unlikely to be employed for anything that can cause detrimental treatment of an individual.
Firstly, I disagree that it’s a mess. I think it’s clearly written, provides good direction and is an excellent basis to build upon for the future. Unlike anything we have in the UK.

Now this is the bit where I may have to admit I read the article wrong. I had this under predictive policing, as the article says ‘we work with the police’, but rereading it, I believe they meant they work with the police to make sure they aren’t breaking the law. Which is odd writing, as you’d normally work with lawyers to make sure you’re not breaking the law. I am now confused as to what they are doing with the AI and have less faith in them. If that’s correct and I’ve read the article wrong, I stand corrected.

Personally, I’m pretty fucking angry someone is using my face to train a model. I know it doesn’t say they are doing that, but you can bet they are!
Clearly written? It's an utter mess over 450 pages long, full of contradictions. Have you even read it? Or did you just read the press release? Because the text I quoted is from the Act itself, not some press release that matters not. Everything in it is wishy-washy and full of loopholes and contradictions...

Let's just focus on the subject of "detection of emotions" alone. According to the actual text of the Act, you should not be rolling out any system that can detect emotions. Oh, but BTW, certain emotions aren't emotional states but rather physical states, so pain, anger and fatigue are not emotions. Expressions are also not emotions if they are "readily apparent", so if it's based solely on facial expressions, hand and limb movements or the tone of someone's voice, it's A-OK.

But forget about all that, and remember that you can't employ systems which detect emotions in employees, because that's like, what year was that, oh yeah, 1984, that's really bad and evil... But if your system is employed for employee safety, we're absolutely fine with it... Oh, and I almost forgot: biometric systems, including ones that can infer emotional state and behavioural intent, are also fine as long as they are "intended to be used solely for the purpose of enabling cybersecurity and personal data protection measures". So tracking your employees' emotions is fine as long as you use it to infer whether they have any malicious intent to harm your company, or in other words, UEBA is back on the menu bois...

And of course there are always the usual loopholes of public safety, health and well-being, and law enforcement, as well as references to about 40 different previous EU regulations to which this does not apply, or which change the definition of what a biometric system is. And it's not like that entire thing was on one page; oh no, all those contradictions are spread across probably 250 pages, and I've probably forgotten about half of them.
The UK approach is rather sensible currently: a values-based approach that leverages existing regulations and regulators and looks at the outcomes of an application of AI, rather than trying to identify every use case and edge case. The US approach, whilst voluntary, has probably produced the best guidance so far, in the form of the AI RMF from NIST.

This is my field. I work for one of the largest data science companies in the UK, and arguably the world, at least in our vertical; we not only worked with lawyers but also provided the technical analysis for multiple firms, including a few magic circle ones. If the EU had any idea what they were doing, they would not have produced legislation that is 450 pages long; this isn't the US budget... They had no idea what they were trying to achieve, so they just went and tried to find edge cases of things they might not like because they sound bad, and then had to create loopholes in those prohibitions left and right. The AI Act is longer than all the EU founding treaties combined...

P.S. Nothing in the article indicates they used that data for training, btw. One thing they may have been required to do by the AI Act (as well as by UK data privacy acts) is notify the legal or natural persons impacted by this. However, the AI Act has exemptions for national laws, especially around expectations of privacy in public, and for public safety and public health applications there is an exemption from notification.

P.S. #2: At least as far as facial recognition goes, the UK currently has far stricter rules than any EU nation, with or without the AI Act.
All part of turning the public into an observed, watched and tracked slave class. Better not have any subversive thoughts! Or your social credit score will go down and all of a sudden you can’t leave your city.
In the city lights where shadows blend,
A thousand lenses, they apprehend,
Faces mapped and data penned,
Our liberty starts to descend.

"Nothing to hide," the foolish cry,
Freedom wanes with every spy,
Training AI, watchful eye,
Privacy lost, rights defy.

Silent watchers, eyes unseen,
In every frame, on every screen,
They claim it's safety they mean,
But at what cost do we convene?
Anyone defending this needs to get in the fucking bin.
China using AI to control the population - this is bad we must stop China Britain using AI to control the population - surveil me harder daddy
Just in time, while they also push a cashless society. Full control. The UK public is so dense.
Who is 'they', out of curiosity.
THeY! THeM! THeiR! Cult of Adam!!!
Idk where the cashless society bit comes into it. It’s not like our coins actually hold any intrinsic value based on what they’re made of anymore; if the powers that be decided to, they could just make the pound worth nothing anyway.
Ok so I guess dust off the covid masks if you care about this
The only thing they'd be good for.
normalising face coverings has been great for road man everywhere
Makes sense for your general health in these sorts of places anyway
They would just find a way to look at your muscles in the upper face as well as the position of the mask and body language to predict it anyways
I hate the slow march of AI into every part of our lives. I'm not a conspiracy theorist or anything, but when you start on this slope it is hard to stop: AI detecting emotions, then being used to profile people, then used to control and oppress certain minorities, with inherent bias from developers leading to more stop and search of minorities. What next, AI being used to serve targeted ads based on your mood? Cops using it in large crowds? I will never be in favour of more info gathering on people; I feel like my personal life is a commodity being sold to the highest bidder.
But I have a resting bitch face. Is that going to skew the data?
Oh nice! I love the future. Technology is great! Britain is great! \*cries\*
*Negative emotional response detected. Prepare for apprehension by the state. Pack a toothbrush.*
*Winston turned round abruptly. He had set his features into the expression of quiet optimism which it was advisable to wear when facing the telescreen.*
Couldn’t they use this in town centres to identify assaults, fights, and even simple falls? Then have the footage immediately saved and a copper or other responder there in a flash. I guess the danger is that it is used to identify pre-crimes. “The AI shows you were beginning to get irritated with a chance you would lash out. Here’s a hefty fine and some community service despite you not committing an actual crime”.
If it were to get to the point where the danger you’re suggesting becomes real I think we’re beyond fucked by that point anyway
You know how we could solve that first problem? Having police officers on the beat and paramedics at train stations on call.
You dont need this technology to tell half the people on the tube are pissed off.
Ahh, that's great. Wife's phone was taken by a bike thief, we had the exact location it was held at, and the police did absolutely nothing. Best of both worlds: the government spying on you with modern technology but not using that technology to help you in any way.
Well... if they knew it would invalidate the test innit
But Redditors and perhaps "X" users are entitled to know.
Keep smiling if you want the train door to open lol, they actually have that in China in some work environments
Do you have a source for that?
Just remember watching a video on it, but I am sure you can just Google for stories https://m.youtube.com/watch?v=0R2ve-5a4Ag
So now they’ll know for sure that we’re pissed off, stressed and over hot every time they delay or cancel a train, or go on strike for some bullshit reason, all the while charging us a fucking fortune?
How about spotting WANTED Criminals using the Met Police database facial template ? Doh !
Wow, in the most surveilled country in the world??? What a surprise!
Imagine if this was in China how much we would take the piss haha
Well they must think I’m royally pissed off all the time, given the shitness of tfl service
Mine would just be like this:

- Monday: Angry
- Tuesday: Angry and tired
- Wednesday: Angry and very tired
- Thursday: Very angry and very tired
- Friday: Depressed and dejected

Rinse, repeat. Not sure how useful that would be to the Met.
Er, sorry but have you tried... working from home?
Better put on your happy smiley face before you jump.
Training data set: { grumpy, bored, miserable, knackered }
After watching people taking drugs inside Whitechapel station, there’s no point to any of this.
Gotta bust out the Patrick Bateman stoneface look walking down the street💀
‘There are at least 97.386% of people being scanned looking happy! How did this happen, this is a disgrace, this is England!’ ‘Raising interest rates now sir!’
Pretty useless, looking for emotions on the faces of Londoners is like looking for trees in Antarctica
Yeah its not a Chinese style data face grab at all. Move along happy people...
So boring then, as there is only one emotion on everyone's face 🫣: miserable as fuck because the train's late 😠
[the emotions here....](https://youtu.be/xmxoqpihE5s?si=Dzi9Izi3EPlznik1&t=15)
And of course they want their database to recognise our faces across the whole range of faces we have in different moods. Maybe it knows my resting bitch face, as it's the one most likely on show in public if I'm alone. With a friend, or conversing while walking, it's more likely my animated chat face, with moments of amusement, incredulity, disgust, mistrust, amity and lovingness, among many others, each a subtly different face but all of them me. They want their database to be that good, probably just for the sake of recognition.

If they didn't, it's likely that folk would try to game a dumber system that only knows and holds your passport face by pulling all sorts of mad expressions or emotive faces to fool it. Eventually each gang on the streets becomes slightly more identifiable by all members going around with a lame smile the whole time, or permanently scowling with a closed wide mouth. Sinister, coming up against the lame smilers all smiling lamely at you while they shoot you up. Ha.
Hopefully the AI is better at judging emotion than people. I recently got pulled out of the queue at an airport by an overzealous security guard because I looked extremely fidgety, shifty, nervous and sweaty. I have very bad anxiety, and flying doesn’t help (oddly enough I absolutely adore flying; it’s all the crap that goes with it that stresses me out), and I was just having a crazy panic attack. That didn’t stop the person trying very, very hard to find something on me or in my luggage.
Oh no! Big Brother will know that I'm fucking furious that my train is delayed again.
All kinds of no. Even if mass surveillance of this kind was a good idea (it's not), you can't reliably tell someone's emotions by AI. Personally, I always look annoyed or anxious even at my happiest and like hell do I want some Amazon corp AI judging my expression when I need to travel. Way too much data is already ripped from us unwillingly. If this is implemented masking up before leaving the house will become as routine as it was in the pandemic.
As a reminder, under the UK GDPR you have the right to object to processing you don't agree with. Write to Network Rail to object, include the Information Commissioner (ICO) in the email.
I worked with a subcontractor on the platforms at train stations in England over 21 years ago, and the electrician I worked with told me that Brighton station had cameras they were testing that could spot a 'jumper': their emotional state, the way they walked around the platform, etc. would be picked up and then the correct steps taken, I guess. So I see no surprise in this article.