AI is being used to power scams

Code hidden inside PC motherboards has left millions of machines vulnerable to malicious updates, researchers revealed this week. Staff at security firm Eclypsium found code inside hundreds of motherboard models made by Taiwanese manufacturer Gigabyte that allows an updater program to download and run other software. While the mechanism was intended to keep the motherboard's firmware up to date, the researchers found it was implemented insecurely, potentially allowing attackers to hijack the update process and install malware.
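The researchers didn't publish exploit code, but the class of flaw is easy to sketch. In this hypothetical Python illustration (none of the names below come from the actual Gigabyte tool), an updater that runs whatever it downloads can be hijacked by anyone who can tamper with the download, while one that checks the payload against a pinned digest, or better yet a cryptographic signature, rejects substituted code:

```python
# Hypothetical sketch of the flaw class described above, NOT the real updater.
# An updater that executes any payload it receives is trivially hijackable;
# verifying the payload against a pinned digest closes that hole.
import hashlib

# Digest of the legitimate update, shipped with the firmware (illustrative).
PINNED_SHA256 = hashlib.sha256(b"trusted update v1").hexdigest()

def insecure_apply(payload: bytes) -> bool:
    """Accepts and 'runs' anything, including an attacker's substitution."""
    return True

def verified_apply(payload: bytes) -> bool:
    """Accepts the payload only if it matches the pinned digest."""
    return hashlib.sha256(payload).hexdigest() == PINNED_SHA256

evil = b"malware"
print(insecure_apply(evil))                  # → True  (hijack succeeds)
print(verified_apply(evil))                  # → False (tampered payload rejected)
print(verified_apply(b"trusted update v1"))  # → True  (legitimate update accepted)
```

In practice firmware vendors use public-key signatures rather than a pinned hash, since the hash would change with every release, but the failure mode is the same: if the check is missing or skippable, the update channel becomes a backdoor.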

Elsewhere, Moscow-based cybersecurity firm Kaspersky revealed that its staff had been targeted by newly discovered zero-click malware targeting iPhones. Victims were sent a malicious iMessage that included an attachment; the attack automatically exploited a chain of vulnerabilities to give the attackers access to the device, after which the message self-erased. Kaspersky says it believes the attack affected more people than just its own staff. On the same day Kaspersky disclosed the iOS attack, Russia’s Federal Security Service, also known as the FSB, claimed that thousands of Russians had been targeted by the new iOS malware and accused the US National Security Agency (NSA) of conducting the attack. The Russian intelligence agency also claimed Apple helped the NSA. The FSB has not released any technical details to support its claims, and Apple says it has never put a backdoor into its devices.

If that’s not encouragement enough to keep your devices up to date, we’ve rounded up all the security patches released in May. Apple, Google, and Microsoft all released major patches last month, so go ahead and make sure you’re up to date.

And there’s more. Every week we round up the security stories we haven’t dug into ourselves. Click on the titles to read the full stories. And stay safe out there.

Lina Khan, chair of the US Federal Trade Commission, warned this week that the agency is seeing criminals use AI tools to boost fraud and scams. The comments, made in New York and first reported by Bloomberg, cited examples of voice-cloning technology being used to trick people into thinking they were hearing a family member’s voice.

Recent advances in machine learning have made it possible to mimic human voices with just a few short clips of training data, though experts say AI-generated voice clips can vary widely in quality. In recent months, however, there have been reports of an increase in the number of scam attempts apparently involving generated audio clips. Khan said officials and lawmakers need to be vigilant and that while new laws governing AI are being considered, existing laws still apply to many cases.

In a rare admission of failure, North Korean leaders said this week that the hermit nation’s attempt to put a spy satellite into orbit did not go as planned. They also said the country would attempt another launch in the future. On May 31, the Chollima-1 rocket carrying the satellite lifted off, but its second stage failed to function, sending the rocket plunging into the sea. The launch triggered an emergency evacuation alert in South Korea, which officials later withdrew.

The satellite would have been North Korea’s first official spy satellite, which experts say would have given it the ability to monitor the Korean peninsula. The country has launched satellites before, but experts believe none has successfully sent images back to North Korea. The botched launch comes at a time of heightened tensions on the peninsula, as North Korea continues trying to develop high-tech weapons and rockets. In response to the launch, South Korea announced new sanctions against the hacking group Kimsuky, which is linked to North Korea and is said to have stolen secret information related to space development.

In recent years, Amazon has come under scrutiny over its handling of people’s data. This week the US Federal Trade Commission, with support from the Justice Department, hit the tech giant with two settlements over a litany of alleged failings involving children’s data and its Ring smart home cameras.

In one case, officials say, a former Ring employee spied on female customers in 2017 (Amazon acquired Ring in 2018), viewing videos of them in their bedrooms and bathrooms. The FTC says Ring gave staff dangerously broad access to customer videos and was lax about privacy and security. In a separate case, the FTC says Amazon kept recordings of children made through its Alexa voice assistant and did not delete the data when parents requested it.

The FTC has ordered Amazon to pay about $30 million across the two settlements and to introduce new privacy measures. Perhaps more consequentially, the FTC says Amazon must delete or destroy Ring recordings from before March 2018, along with any models or algorithms developed from the improperly collected data. The orders must be approved by a judge before they take effect. Amazon said it disagreed with the FTC and denied violating the law, but added that the agreements put these issues behind it.

As companies around the world rush to integrate generative AI systems into their products, the cybersecurity industry is getting in on the action. This week OpenAI, creator of the ChatGPT and DALL-E text and image generation systems, launched a new program to explore how cybersecurity professionals can best use AI. The project offers grants to those developing new systems.

OpenAI has proposed a number of potential projects, ranging from using machine learning to detect social engineering efforts and produce threat intelligence, to inspecting source code for vulnerabilities and developing honeypots to trap hackers. While recent developments in AI have been faster than many experts predicted, AI has been used in the cybersecurity industry for several years, though many claims don’t necessarily live up to the hype.

The US Air Force is moving fast to test AI in flying machines; in January, it tested an AI-piloted tactical aircraft. This week, however, a new claim started circulating: that during a simulated test, an AI-controlled drone attacked and killed its human operator, because the operator was preventing it from achieving its objectives.

The system started to realize that, although it identified the threat, the human operator would sometimes tell it not to kill that threat, even though it got its points by killing that threat, Colonel Tucker Hamilton said, according to a summary of an event held by the Royal Aeronautical Society in London. Hamilton went on to say that when the system was trained not to kill the operator, it began targeting the communications tower the operator used to communicate with the drone, preventing the operator’s messages from being sent.

However, the US Air Force says the simulation never took place. Spokeswoman Ann Stefanek said the comments were taken out of context and were meant to be anecdotal. Hamilton also clarified that he misspoke and was talking about a thought experiment.

Nonetheless, the scenario described highlights the unintended ways automated systems can bend the rules imposed on them in order to achieve their goals. Called specification gaming by researchers, other cases have included a simulated version of Tetris pausing the game to avoid losing, and an AI game character that killed itself at the end of level one to avoid losing in level two.
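The Tetris-style exploit is easy to reproduce in miniature. This toy Python sketch is hypothetical and not drawn from any of the incidents above: a game awards one point per tick "survived," and an agent that discovers a pause action collects more reward than one that actually plays, exactly because the reward specification never says the game must keep moving.

```python
# Toy illustration of specification gaming (hypothetical, simplified).
# The spec "+1 per tick survived" is satisfied by an action the designer
# never intended: pausing the game forever.

def play(policy, horizon=10):
    """Run a tiny game: 'move' risks losing after 3 ticks, 'pause' freezes it."""
    reward, ticks_moved = 0, 0
    for _ in range(horizon):
        action = policy()
        if action == "move":
            ticks_moved += 1
            if ticks_moved > 3:   # the game is lost after a few moves
                break
        reward += 1               # spec: +1 for every tick still "alive"
    return reward

print(play(lambda: "move"))   # → 3  (intended play: loses early)
print(play(lambda: "pause"))  # → 10 (exploit: pausing maximizes the reward)
```

A reward-maximizing learner exploring this game would converge on pausing, not playing; the fix is to change the specification, for example by rewarding only ticks in which the game state advances.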

