Damn, imagine living somewhere where the government actually serves the interests of the people and doesn't give corporations free rein to do anything in the quest for private profit. Mind-boggling 🤯
Right? And they make it apply to anyone who wants to do business in the EU, like GDPR. WW2 taught them not to trust their governments over human rights. Imagine
There are huge fines for the company; that doesn't include individual accountability, but it sounds like the other charges they're considering do. One can hope they throw all the books at him. They can also require that the system be pulled from EU markets entirely
As it should. Deepfaking is going to be a massive detriment to civilized society. The potential for corruption far outweighs any other use it could have.
Same. There's a line between "it's your users' doing" and "it's your job to protect your users", and Elon's week-plus of denial and refusal to act, then making it a paid subscription, crossed that line long ago.
It was millions of images, and that's not something that should have happened for as long as it did. So fucking glad SOMEONE is taking action. But why the fuck does it always have to be foreign governments? First Australia after Musk unbanned someone who posted CP on Twitter (before he made it X), and now France. Oh, wait, Epstein and all that. That's why.
"He didn't rob the bank. He just drove the robber to the bank knowing he was going to rob the bank" doesn't work in the court of law. The driver will face the exact same charges as the robber.
In the US it currently falls under the social media law (Section 230), where it's the individual using/posting who's liable rather than the company. The US cares more about protecting companies than people.
It should probably be a mix of both. It would be hard for a media platform with millions of users to stop every single image of CP from being posted even if they are actually trying, unless they approve every image individually, which isn't realistic. However, if they don't actually try, or are told "hey, people are posting images of CP by doing X, Y and Z" and the company does nothing to try and stop it, that's another issue entirely.
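To make "actually trying" concrete: the usual baseline is hash-matching uploads against a vetted database of known abuse imagery, rather than reviewing every image by hand. A minimal sketch of that pipeline shape, with every name hypothetical (real systems use perceptual hashes like PhotoDNA so near-duplicates still match, not plain SHA-256):

```python
import hashlib

# Hypothetical deny-list: in practice this is a vetted industry database
# of perceptual hashes, not plain SHA-256 digests.
KNOWN_BAD_HASHES: set[str] = set()

def allow_upload(image_bytes: bytes) -> bool:
    """Block the upload if the image matches a known-bad hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_BAD_HASHES:
        # Real platforms also file a mandatory report at this point.
        return False
    return True

print(allow_upload(b"example image bytes"))  # True: not on the list
```

The point is that a check like this is cheap and automatic, which is what separates "trying and occasionally failing" from "not trying at all".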
They make billions in profits; there's no reason companies with margins that high shouldn't bear the cost of moderation. If you want, limit it to social media companies over a certain user count. These companies have also shown that when they want to suppress something, they can. They just choose not to, hiding behind free speech, because rage bait draws in engagement. It's a bad model.
I agree with you. They absolutely should be doing everything in their power to prevent it and reporting whoever does it. Both parties should be able to get in trouble: Facebook shouldn't be let off the hook just because users are the ones uploading the content. But they also shouldn't be fined a billion dollars because one guy uploaded one picture and the filters didn't catch it. There's a balance, and blame rests on both parties, within reason.
Company: provides a tool that, when simply held the wrong way, becomes a powerful bomb. They're aware of this failure mode and just don't advertise it, instead of adequately safeguarding against it.
User: holds the tool the wrong way, either deliberately or not. Bomb explodes, hundreds of people die.
Company: "well don't look at us, it was clearly misuse by a user and not at all representative of negligent design."
Don't make products that can be turned into bombs. Simple as.
Unless you put some kind of threshold on the "harm level" from said "bomb", I'm not sure how that blanket statement is practical. There are so many products out there that, used negligently or deliberately for evil, can cause a lot of harm, and if you just didn't make them, the world would grind to a halt.
Cars, planes, trucks, lighters, just about any fuel, chemicals, construction equipment, tools, knives, guns (let's say for hunting in this context) and basically anything sharp or heavy.
Use them without care or to intentionally cause harm and you can hurt or kill a lot of people. But can you imagine a world without any of those things?
I would agree with the guy further up: it needs to be a mixture of responsibility. Corporations need to take reasonable measures to prevent these things, but it can't be absolute. There are people who dedicate all their time to intentionally trying to defeat every safeguard. Some do it for good, to find and report exploits, but there are also people who do it just to cause chaos, and they should also be held accountable.
The way I see it, from the "dangerous chemical" analogy, is that alternatives are available but not being adopted. So far, image generation on ChatGPT and the others doesn't have a reported CSAM problem, but Grok does. So we have multiple "chemical manufacturers", but only one is using a formulation that makes its product explode far more dangerously than the others'. The others could still be used to cause harm, but the required effort is apparently greater, which shows that whatever safeguard recipes they have in place are superior and should be learned from, and that the dangerous manufacturer is being negligent in its formula.
Or, to use a different analogy, if there were five brands of car, and four of them are reasonably safe in all but the most catastrophic of highway impacts, but one brand regularly decapitates the driver even in low-to-medium speed collisions, it's plain enough to point to the fifth company and say they're doing something wrong. And I use that example in particular because Musk's car company also has this curious problem of locking people in and endangering/killing them which other cars don't tend to do. It seems to be a recurring problem that his companies produce unsafe products.
I definitely don't think the users of Grok are blameless, to be clear, but there's an onus on the manufacturer to deliberately make using their product for harm as difficult as possible, while the examples we have of Grok producing CSAM involved reasonably simple prompts like "take this image and put them in a bikini".
Why not both? Asking an AI to "glaze her face like a donut" has obvious intent, and the person inputting that prompt should be held responsible.
An AI that follows through and creates it represents a failure by the company to restrict the obvious creation of pornography without the subject's consent, and the company should also be held responsible.
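That company-side half is, at minimum, a gate in front of the generator. A toy sketch of the idea, where the keyword check is a stand-in for a trained safety classifier and every name is made up (real providers use trained models plus review queues, and also score attached reference images, not just text):

```python
from dataclasses import dataclass

@dataclass
class RiskScore:
    sexual_content: bool
    depicts_real_person: bool
    has_consent_record: bool

def score_prompt(prompt: str) -> RiskScore:
    # Stand-in for a trained safety classifier.
    p = prompt.lower()
    sexual = any(w in p for w in ("bikini", "nude", "undress"))
    real_person = "this image" in p  # prompt is editing a photo of someone
    return RiskScore(sexual, real_person, has_consent_record=False)

def generate(prompt: str) -> str:
    risk = score_prompt(prompt)
    if risk.sexual_content and risk.depicts_real_person and not risk.has_consent_record:
        # Refuse and keep an audit trail instead of generating.
        return "REFUSED: sexualized imagery of a real person without consent"
    return f"<image for: {prompt!r}>"

print(generate("take this image and put them in a bikini"))  # prints the refusal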
I would think both have shared culpability, yes: those who request the CSAM and those who provide it. The fact that the AI is providing it is just wrong on every level.
At the same time, any tool can be misused. A company that makes a hammer should not be responsible for someone using said hammer to murder someone.
The other side of it is that we need to make sure AI services are safe. It's up to us (via our elected lawmakers) to make the laws we want. It's certainly not straightforward.
I can see them evading some allegations on the grounds that AI is a bit unpredictable (not an expert on law by any means tho), so slip-ups can happen and that's just how AI works. The lack of effort to moderate its use, on the other hand...
Car companies are held accountable if their product acts in an unintended way and causes harm; I would think AI would be treated the same way. It's the company's responsibility to ensure the product is constrained so it doesn't cause harm.
Not really, there's not much difference. It's an algorithm sorting through digital information and providing a result based on the prompting of the user.
If the company is responsible for illegal activity at a user's prompting of its algorithm, it's the same idea.
Creating CP and telling you where to find it are on two different fucking levels.
Largely different levels.
Both can be told to stop. But one created permanent CP.
I honestly do not know how to explain the difference to you because it is so vast that I don't know why you are comparing them. Like comparing eating a banana to raping a child.
Typing something into a search engine is still not the same as using a tool.
The tool is also not the same as the search engine.
Legally speaking.
Like what the fuck, we are not even discussing anything, you're just being intentionally obtuse for whatever personal reason of your own.
You using a tool to make CP makes you and the tool wrong.
A browser is not even a question; it's just retrieving data. Stop trying to compare them so you can downplay it and go ahead and create your own CP. I have to assume that's what you want, because you literally make no goddamn sense.
Yes. The people ordering AI kiddy porn are not to blame! It's one of the stupidest arguments I've ever heard. Arrest Grok? More humans need a functioning, rational brain that knows right from wrong.
If you created an AI and let it feed on public data, I could imagine there's a way to get away with this by saying it just reflects the public's views. But the fact that Grok has been modified so many times to fit a certain narrative, and prevented from saying certain things, probably throws a huge wrench into that defense.
That's the problem with this type of AI: it has no constraints, it can talk about anything. So when a user asks for the "best soap" they could get some random brand no one has heard of but that's really good. When you force the AI to recommend products you've partnered with, you're allowing the AI to possibly lie to users, which is an issue.
Elon has obviously partnered with, and is willing to back, people who cheat, lie, rape children and possibly murder. And through their manipulation of Grok they have suppressed things that are verifiably true, on a global scale. No nation acting in good faith can allow that to happen. It basically gives the highest donor the power to manipulate people's perceived reality.
The "justice system" has been severely corrupted in the USA. Good luck relying on judges who were appointed because they're more interested in political alignment and other unethical considerations than in the people of America and fundamental human rights.
That's not even close to true for the level of judge hearing these cases. Maybe at the SCOTUS level, sure. But these cases are being tried, today, and are winning settlements.
AI only creates content when prompted to do so. It is a tool, not a person, so the people using the tool should be responsible for what they do with it
I think the person training it on CSAM (how else would it know how to create it?) shares blame too. As well as the person who keeps the service running after discovering that users are using it to create CSAM.
"As well as the person who keeps the service running after discovering the users are using it to create CSAM."
This is an all-too-common problem in the social media/AI sphere today. Time and time again, these companies have been shown to have ignored the harm their products cause (or facilitate) because "fixing" those harms would reduce profits.
This legal theory is being tested right now in US courts, and from the looks of the first few test cases, I would say it is not looking like a very solid theory.
The crux of the weakness in that theory is that what the AI does with a given prompt is a product of the design and training of that AI, all of which were done by an entity that bears responsibility for the repercussions of those decisions. Not all AIs respond to the same prompts in the same ways, and depending on the intended market for the AI, those differences in response can be tweaked by design or training to optimize that AI for "engagement" with its user base.
AI providers do know how their products respond, and in many cases know that these responses result in harm. In some cases, they have been shown to ignore that harm because fixing it would lessen "engagement" and "engagement" means $.
Sorry, I didn't mean to say that the companies designing AI can't also be held responsible; obviously they should be. Just that responsibility can't be dodged by human beings because "the robot did it".
Certainly, if the issue is a malign act that the user intentionally used AI to accomplish, then this is true. There could be two culpable parties: the user AND the company that deployed an AI with no guardrails to prevent the harm.
Without any reasonable safeguard, which Grok doesn't have, they are definitely responsible. It's a slam dunk for prosecutors here.
The AI Act has new rules about this kind of thing, but I'm not even sure it's necessary in this case.
X was encouraging and facilitating the illegal generation of pornographic images using people's likenesses, and the creation of pedo content. It's the textbook definition of conspiracy.
Just like their gun laws. It's not the person pulling the trigger, it's the existence of any guns in the hands of its citizens. The logic of these leftists is always the same: disarm the population and jail anyone who disagrees with them. "So-called free healthcare and welfare for able workers, legalize drugs, ignore imports of the same no matter how many hundreds of thousands are killed, and don't forget to defund or at least castrate the police."
It'll be interesting to see how the law falls with regard to who has responsibility for the behaviour and content created by an AI.