What Happens If Artificial Intelligence Goes Astray, and How Do We Stop It?


We’ve seen artificial intelligence (AI) answer simple conversational questions for school assignments and attempt to detect guns in the New York subway. Now we have seen a criminal convicted for using deepfakes to create child sexual abuse material. Digital security company ESET has examined the efforts being made to keep AI from heading in the wrong direction, and has shared the points that need to be taken into consideration.

ESET has been using artificial intelligence in a security context for years. It has warned that AI is not a magic bullet, partly because it gets critical things wrong. Even security software that makes mistakes “only occasionally” can have a very negative impact: either by emitting massive false positives that trigger unnecessary effort from security teams, or by overlooking a malicious attack that looks “different enough” from the malware the AI already knows. So, to provide checks and balances, ESET layers AI with a range of other technologies. That way, if the AI’s answer resembles a digital hallucination, the rest of the stack can pull it back.
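The layering idea can be illustrated with a small sketch. This is not ESET’s actual architecture or API; every function name here is a hypothetical stand-in. The point is simply that an AI score alone never decides a verdict, it must agree with at least one other layer:

```python
# Hypothetical sketch of a layered verdict: an AI classifier's score is
# cross-checked against a signature engine and heuristics, so a single
# "hallucinated" score cannot flag (or clear) a sample on its own.
# All detection functions below are illustrative stand-ins.

def ai_score(sample: bytes) -> float:
    """Stand-in for an ML model's malware probability."""
    return 0.92 if b"evil" in sample else 0.10

def signature_match(sample: bytes) -> bool:
    """Stand-in for a classic signature engine."""
    return b"evil" in sample

def heuristic_flags(sample: bytes) -> int:
    """Stand-in for behavioral heuristics (count of suspicious traits)."""
    return sample.count(b"evil")

def layered_verdict(sample: bytes) -> str:
    votes = 0
    if ai_score(sample) > 0.8:
        votes += 1
    if signature_match(sample):
        votes += 1
    if heuristic_flags(sample) >= 1:
        votes += 1
    # Require agreement from at least two layers before flagging,
    # which dampens both false positives and AI hallucinations.
    return "malicious" if votes >= 2 else "clean"
```

The design choice is the voting threshold: one overconfident layer, AI included, is outvoted by the others.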

While adversaries haven’t launched many pure AI attacks yet, they are using AI to supercharge social engineering, phishing in particular, and voice and image cloning to make it more effective. We also need to think about hostile AI automating the links in the attack chain. If bad actors can digitally gain trust and trick systems into authenticating with AI-generated data, that is enough for them to break into your organization and manually launch specialized exploit tools. To stop this, vendors can add a multi-factor authentication layer, so that attackers need multiple authentication methods instead of just one voice sample or password. Although this technology is now widely available, it is not adopted enough by users. It is a simple way for users to protect themselves without a heavy burden or a large budget.


Is it all the fault of artificial intelligence?

Is artificial intelligence at fault? When asked why AI gets things wrong, people have humorously answered, “it’s complicated.” But as AI approaches the ability to cause physical harm and affect the real world, this is no longer a satisfactory or sufficient response. For example, if an AI-assisted driverless car crashes, is the “driver” punished, or the manufacturer? “It’s complicated” is not an explanation that would satisfy a court, no matter how complex and opaque the system really is.

What about privacy? We have seen GDPR rules, viewed through the lens of privacy, keep technology from running roughshod over individuals. Of course, AI slicing and dicing original works to produce derivatives for profit runs contrary to the spirit of those protections and will therefore trigger protective laws. But how much must an AI copy for its output to be considered derivative, and what happens if it copies just enough to circumvent the legislation? And with little case law, which will take years to be properly tested, who will prove this in court, and how? We are already seeing newspaper publishers sue Microsoft and OpenAI, believing their articles are being reproduced by this technology without attribution; the outcome of the case remains to be seen, and may be a harbinger of future legal proceedings.

AI is a tool, and generally a good one, but with great power comes great responsibility. Right now, the accountability demanded of AI providers falls far short of the consequences when this newfound power goes bad.
