{"id":33697,"date":"2023-06-06T15:22:38","date_gmt":"2023-06-06T14:22:38","guid":{"rendered":"https:\/\/www.innovationnewsnetwork.com\/?p=33697"},"modified":"2023-06-06T15:24:27","modified_gmt":"2023-06-06T14:24:27","slug":"ai-cannot-replace-human-input-comes-defeating-cybercriminals","status":"publish","type":"post","link":"https:\/\/www.innovationnewsnetwork.com\/ai-cannot-replace-human-input-comes-defeating-cybercriminals\/33697\/","title":{"rendered":"AI cannot replace human input when it comes to defeating cybercriminals"},"content":{"rendered":"
While we grapple with the ethics of AI, large language models (LLMs)<\/a>, and debate how to regulate their use, there\u2019s no doubt that these technologies will enable a higher volume of cyberattacks. This is going to add to the workloads of hard-pressed, under-resourced security teams who will have to ensure their response teams are up to the task.<\/p>\n To protect themselves and keep their information secure, organisations will need to ensure they have a solid combination of tools, technologies, and human input to maintain capabilities that equal their adversaries.<\/p>\n Those who have a hacker mindset are most likely to outwit the criminals.<\/p>\n One immediate way that generative AI can be used to accelerate attacks is through the use of deep fakes and phishing attacks. But in the medium term, LLMs have levelled the playing field between sophisticated threat actors and the mediocre.<\/p>\n In the past, exploiting an SSRF vulnerability or reverse engineering a binary application were specialist activities that only a handful of professionals could master. Today, anyone can ask a chatbot to generate code to achieve sophisticated outcomes that were previously too difficult for most people to control.<\/p>\n AI stacks also add to the attack surface. As well as needing to protect the models themselves, companies will need to consider how to test for emerging vulnerabilities such as prompt injection, inadequate sandboxing, and training data poisoning. The Open Web Application Security Project (OWASP) last month released version 0.1 of the OWASP Top 10 List for Large Language Models<\/a>.<\/p>\n These new technologies could contribute to both an increase in the volume and sophistication of cyberattacks. This comes at a time when organisations are already under considerable strain.<\/p>\n According to recent research, the UK has the highest number of cybercrime victims per million internet users<\/a> in 2022 at 4,783 \u2013 that\u2019s up 40% since 2020.<\/em><\/p>\n Perhaps even more worrying is the fact that one-third of organisations say they monitor less than 75% of their attack surface<\/a> and 20% believe over half is unknown or unobservable<\/a>.<\/p>\n It\u2019s these unobserved blind spots where unknown vulnerabilities lurk that represent the greatest risk for organisations and the biggest opportunity for criminals.<\/p>\nCybercrimes are still increasing<\/h3>\n